The present disclosure relates to a display method of an image and a display method of an electronic device, and more particularly to a display method of an image and a display method of an electronic device for augmented reality technology.
The augmented reality (AR) display technology has been widely used in various fields. This technology displays images together with scenes in the real environment. However, in current image display methods, the image and the scene of the real environment cannot be effectively superimposed when the user views them, resulting in a blurred image or a blurred scene of the real environment. Therefore, there is a need to provide an improved display method for augmented reality.
An embodiment of the present disclosure provides a display method of an electronic device including the following steps. Providing a first image by a display, and a first vergence plane of two eyes of a user is located at a first position when the user views the first image. Providing a second image by the display, and the first image, the second image and an environmental scene are located within a field of view of the user, and a second vergence plane of the two eyes of the user is located at a second position when the user views the second image. A first distance exists between the first position of the first vergence plane and the user, a second distance exists between the second position of the second vergence plane and the user, and the first distance is different from the second distance.
An embodiment of the present disclosure provides a display method of an image, which comprises the following steps. Providing a display, the display provides a first image, and the first image and an environmental scene are located within a field of view of a user. A first vergence plane of two eyes of the user is located at a first position when the user views the first image, and a second vergence plane of the two eyes is located at a second position when the user views an object in the environmental scene. A first distance exists between the first position of the first vergence plane and the user, a second distance exists between the second position of the second vergence plane and the user, and the first distance and the second distance satisfy a first relation: Dn < D1 < Df, wherein D1 represents the first distance, Dn = D2 + Δn, and Df = D2 + Δf, wherein D2 represents the second distance, Δn = (De/2)*tan[tan⁻¹(2*D2/De) − δ] − D2, and Δf = (De/2)*tan[tan⁻¹(2*D2/De) + δ] − D2, and wherein De represents a distance between the two eyes, δ represents an eye angular resolution of the two eyes, and δ = 0.02 degrees.
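Since the first relation is fully determined by D2, De and δ, it can be evaluated numerically. The following is a minimal Python sketch, not part of the disclosure, assuming all distances are in meters and an illustrative interocular distance of 65 mm; note that Dn = D2 + Δn simplifies to (De/2)*tan[tan⁻¹(2*D2/De) − δ], and likewise for Df.

```python
import math

def vergence_bounds(d2, de=0.065, delta_deg=0.02):
    """Evaluate the first relation's limits (Dn, Df) for an object at D2.

    d2: the second distance D2 (user to object), in meters
    de: the interocular distance De, in meters (65 mm is an assumed value)
    delta_deg: the eye angular resolution δ, 0.02 degrees per the text
    """
    delta = math.radians(delta_deg)
    theta = math.atan(2 * d2 / de)            # angle subtended at the eyes
    dn = (de / 2) * math.tan(theta - delta)   # Dn = D2 + Δn
    df = (de / 2) * math.tan(theta + delta)   # Df = D2 + Δf
    return dn, df

# Example: an object 10 m away leaves roughly a 9.0 m to 11.2 m window for D1
dn, df = vergence_bounds(10.0)
print(f"Dn ≈ {dn:.2f} m, Df ≈ {df:.2f} m")    # Dn ≈ 9.03 m, Df ≈ 11.20 m
```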
These and other objectives of the present disclosure will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the embodiment that is illustrated in the various figures and drawings.
The present disclosure may be understood by reference to the following detailed description, taken in conjunction with the drawings as described below. It is noted that, for illustrative clarity and ease of understanding, various drawings of this disclosure show a portion of the electronic device, and certain components in various drawings may not be drawn to scale. In addition, the number and dimension of each element shown in the drawings are only illustrative and are not intended to limit the scope of the present disclosure.
Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will understand, electronic equipment manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”.
In addition, when an element is referred to as being “directly on”, “directly disposed on”, “directly connected to”, or “directly coupled to” another element or layer, there are no intervening elements or layers present.
The electrical connection may be a direct connection or an indirect connection. When two elements are directly electrically connected, the electrical signals may be transmitted by direct contact, and no other elements are present between the two elements. When two elements are indirectly electrically connected, the electrical signals may be transmitted through an intermediate element bridging the two elements. The term “electrically connecting” may also be referred to as “coupling”.
Although terms such as first, second, third, etc., may be used to describe diverse constituent elements, such constituent elements are not limited by the terms. The terms are used only to distinguish one constituent element from other constituent elements in the specification. The claims may not use the same terms, but instead may use the terms first, second, third, etc. with respect to the order in which an element is claimed. Accordingly, in the following description, a first constituent element may be a second constituent element in a claim.
It should be noted that the technical features in different embodiments described in the following can be replaced, recombined, or mixed with one another to constitute another embodiment without departing from the spirit of the present disclosure.
The electronic device of the present disclosure may include a display device, but not limited herein. The display device may include a touch display, a curved display or a free shape display, but not limited herein. The display device may be a bendable or flexible display device. The display device may include light-emitting diodes, liquid crystal, fluorescence, phosphors, other suitable display media or combinations of the above, but not limited herein. The light-emitting diodes may, for example, include organic light-emitting diodes (OLEDs), inorganic light-emitting diodes (LEDs), mini-light-emitting diodes (mini LEDs, millimeter sized LEDs), micro-light-emitting diodes (micro-LEDs, micrometer sized LEDs), quantum-dot (QD) light-emitting diodes (e.g. QLEDs or QDLEDs), other suitable light-emitting diodes or any combination of the above, but not limited herein. The concept or principle of the present disclosure may also be applied to non-self-emissive liquid crystal displays (LCDs), but not limited herein.
The display device may be any combination of the devices described above, but not limited herein. In addition, the appearance of the display device may be rectangular, circular, polygonal, a shape with curved edges or other suitable shapes. The electronic device may have external systems such as a driving system, a control system, a light source system, a shelf system, etc. to support a display device.
Please refer to
As shown in
The display 100 may provide an image. The image may be projected onto a glass 112 (such as a windshield) through the optical element 104, the image may form an image V1 (also referred to as a first image) on a virtual image plane Pi outside the glass 112, and two eyes of a user (such as an eye 1021 and an eye 1023) may view the image V1 through the glass 112. Therefore, the image V1 outside the glass 112 and an environmental scene may be located within a field of view (FOV) of the user. In addition, the display 100 may provide a left-eye image for the eye 1021 and a right-eye image for the eye 1023, and an offset exists between the left-eye image and the right-eye image, so that the image V1 finally viewed by the user is a three-dimensional image, but not limited herein. In addition, the electronic device 10 of this embodiment may project a single image V1 onto one virtual image plane Pi, but not limited herein.
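The offset between the left-eye image and the right-eye image mentioned above is what determines where the two sight lines converge, and it follows from simple triangulation. The sketch below is an illustrative 2D model, not taken from the disclosure: it assumes the eyes sit on a baseline De, ignores the windshield optics, and uses hypothetical names and values.

```python
def image_offsets(d_image, d_vergence, de=0.065):
    """Horizontal shifts of the left-eye and right-eye pictures on a virtual
    image plane at distance d_image so that the two sight lines converge at
    distance d_vergence (all distances in meters, measured from the eyes).
    Returns (left_shift, right_shift); positive is toward the viewer's right.
    """
    shift = (de / 2) * (d_image / d_vergence - 1)
    return shift, -shift  # crossed when d_vergence < d_image, uncrossed otherwise

# Push the vergence plane to 10 m while the virtual image plane stays at 7.5 m
xl, xr = image_offsets(7.5, 10.0)
print(f"left-eye shift {xl * 1000:+.3f} mm, right-eye shift {xr * 1000:+.3f} mm")
# -> left-eye shift -8.125 mm, right-eye shift +8.125 mm
```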
Please refer to
In the example (ii), an image Vx provided by the conventional augmented reality system may be located on the virtual image plane Pi. The sight lines of the two eyes converge on the virtual image plane Pi, and the single eye also focuses on the virtual image plane Pi when the user views the image Vx. In other words, in the example (ii), the positions of the vergence plane Pvx of the two eyes, the accommodation plane Pax of the single eye and the virtual image plane Pi of the image Vx are the same. However, in the example (ii), as shown by a dashed frame Y, the first object in the environmental scene within the dashed frame Y viewed by the user is blurred while the user is viewing the image Vx.
However, when the image Vx provided by the augmented reality system is used to mark the first object in the environmental scene, due to the difference in the positions of the vergence planes of the two eyes, the user may not clearly view the image Vx of the augmented reality system and the first object in the environmental scene at the same time, or the user may feel uncomfortable. Therefore, in the display method of the image of the present disclosure, the drawbacks existing in the conventional augmented reality system may be lessened by adjusting the position of the vergence plane of the two eyes when the user views the augmented reality image V1.
As shown in the example (iii) of
In addition, please refer to
The distance D1 and the distance D2 may satisfy a first relation: Dn<D1<Df when the distance D1 and the distance D2 fall within the range R1. The lower limit distance Dn=D2+Δn, and the lower limit distance Dn (as shown in
The method of adjusting the position of the vergence plane Pv1 of the two eyes when the user is viewing the augmented reality image V1 in the present disclosure will be described in the following. Please refer to
In an example of
In another example of
Please refer to
On the other hand, as shown in the example (iii) in
Please refer to
When the distance D1 and the distance D3 fall within the range R2, the distance D1 and the distance D3 may satisfy a second relation as below:
D3 + (D1/1.3052 − 0.2657*D1) < Δd1 < D3 − (D1/1.1286 + 0.442*D1)
A distance difference Δd1 exists between the distance D1 and the distance D3. The uncomfortable feeling of the user caused by the vergence-accommodation conflict may be mitigated when the distance difference Δd1 satisfies the second relation. In addition, in some embodiments, the second relation may be: 0≤Δd1<D3−(D1/1.1286+0.442*D1).
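The bounds of the second relation can likewise be computed directly. The sketch below transcribes the coefficients verbatim from the relation above; since the text does not spell out how Δd1 is signed, the function only evaluates the two bounds, and the simplified variant replaces the lower bound with 0.

```python
def second_relation_bounds(d1, d3):
    """Lower and upper bounds on the distance difference Δd1, transcribed
    verbatim from the second relation:
    D3 + (D1/1.3052 - 0.2657*D1) < Δd1 < D3 - (D1/1.1286 + 0.442*D1)
    """
    lower = d3 + (d1 / 1.3052 - 0.2657 * d1)
    upper = d3 - (d1 / 1.1286 + 0.442 * d1)
    return lower, upper

# Illustrative values only: D1 = 2 m vergence distance, D3 = 10 m accommodation
print(second_relation_bounds(2.0, 10.0))  # -> (≈ 11.00, ≈ 7.34)
```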
The display 100 may include light-emitting diodes, liquid crystal, fluorescence, phosphors, other suitable display media or combinations of the above, but not limited herein. The light-emitting diodes may include, for example, organic light-emitting diodes (OLEDs), inorganic light-emitting diodes (LEDs), mini-light-emitting diodes (mini LEDs), micro-light-emitting diodes (micro-LEDs), quantum-dot (QD) light-emitting diodes (such as QLEDs or QDLEDs), other suitable materials or any combinations of the above, but not limited herein. The display 100 may also be a bendable or flexible electronic device. In addition, as shown in
The optical element 104 may include a mirror, a lens or combinations of the above, but not limited herein. The optical element 104 may include an image surface shift system, but not limited herein. The image surface shift system may include a projection system, a light field technology element, a folding light path element, or combinations of the above, but not limited herein. The projection system may include a lens projector, a mirror or combinations of the above, but not limited herein. The light field technology element may include a holographic optical element (HOE), an integral image element or combinations of the above, but not limited herein. The folding light path element may include a multi-mirror and space element, but not limited herein.
The glass 112 may include a windshield, but not limited herein. The glass 112 may be a wedge type, flat type, curved type or combinations of the above, but not limited herein. A thin film may also be disposed on the glass 112, but not limited herein.
The sensing element 106 may include an eye tracking sensor, a head tracking sensor, a feature tracking sensor or combinations of the above, but not limited herein.
The sensing element 110 may include an environment sensor, but not limited herein. The sensing element 110 may include a camera, a light field camera, a structured light camera, a feature detector, a lidar, a radar or combinations of the above, but not limited herein.
The controller 108 may include programmable components to execute algorithm processing, and may include, for example, a central processing unit (CPU), a system on chip (SoC), an application specific integrated circuit (ASIC), etc., but not limited herein. For example, the controller 108 may receive the information obtained by the sensing element 106 and the sensing element 110, such as street views, pedestrians, the eye information of the user, etc. Based on this information, the controller 108 may obtain the image information for the display 100 through algorithm calculation. The controller 108 may transmit the display data including the image information to the display 100, and the display 100 may provide the image V1 according to the display data, thereby realizing the display method of the image of the present disclosure.
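As a rough illustration of that flow, the sketch below strings the earlier helper functions (vergence_bounds and image_offsets from the sketches above) into a single controller step. Every name, field and value here is hypothetical; the disclosure does not define any particular API for the controller 108.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:      # hypothetical output of sensing element 110
    label: str
    distance: float        # D2: distance from the user to the object, in meters

def plan_markers(objects, d_image=7.5, de=0.065):
    """Hypothetical algorithm step: place each marker's vergence plane at its
    object's distance (which always lies inside (Dn, Df) of the first
    relation) and derive the per-eye shifts on a fixed virtual image plane."""
    display_data = []
    for obj in objects:
        dn, df = vergence_bounds(obj.distance, de)         # comfortable range
        xl, xr = image_offsets(d_image, obj.distance, de)  # stereo shifts
        display_data.append({"label": obj.label, "range_m": (dn, df),
                             "shift_left_m": xl, "shift_right_m": xr})
    return display_data

print(plan_markers([DetectedObject("pedestrian", 10.0),
                    DetectedObject("vehicle", 20.0)]))
```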
Other embodiments of the present disclosure will be disclosed in the following. In order to simplify the illustration, the same elements in the following will be labeled with the same symbols. For clearly showing the differences between various embodiments, the differences between different embodiments are described in detail below, and repeated features will not be described redundantly. In addition, these repeated features may be applied in various embodiments in the following.
Please refer to
As shown in
To enable the user to clearly view the image V2 and the second object in the environmental scene at the same time, or to effectively reduce the user's uncomfortable feeling, the distance D4 and the distance D5 may also fall within the range R1 in
In the third relation, the lower limit distance Ds=D5+Δs, and the lower limit distance Ds (as shown in
Therefore, when the environmental scene includes the first object and the second object, and the distance between the first object and the user is different from the distance between the second object and the user, the electronic device 10 may provide the image V1 corresponding to the first object and the image V2 corresponding to the second object. In addition, the user may clearly view the image V1 and the first object in the environmental scene at the same time, or clearly view the image V2 and the second object in the environmental scene at the same time by adjusting the positions of the vergence planes of the two eyes when the user is viewing the augmented reality image V1 and the augmented reality image V2 through the display method of the image of this embodiment.
On the other hand, one eye (such as the eye 1021 or the eye 1023) of the user focuses on an accommodation plane Pa2 (also referred to as a second accommodation plane) when the one eye of the user views the image V2, and a seventh position of the accommodation plane Pa2 is the same as the position of the image V2. Since the position of the image V2 is different from the position of the image V1 (as shown in
For mitigating the discomfort of the user caused by the vergence-accommodation conflict, the distance D4 and the distance D6 may also fall within the range R2 in
D6 + (D4/1.3052 − 0.2657*D4) < Δd2 < D6 − (D4/1.1286 + 0.442*D4)
A distance difference Δd2 exists between the distance D4 and the distance D6. In addition, in some embodiments, the fourth relation may be: 0≤Δd2<D6−(D4/1.1286+0.442*D4).
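Because the third and fourth relations have the same form as the first and second relations, the earlier sketches can be reused for the image V2 with the corresponding distances; the numeric values below are purely illustrative.

```python
# Third relation: range for D4 given the second object's distance D5 = 20 m
lower, upper = vergence_bounds(20.0)
print(f"{lower:.2f} m to {upper:.2f} m")   # ≈ 16.46 m to ≈ 25.47 m

# Fourth relation: same coefficients as the second relation, with D4 and D6
print(second_relation_bounds(2.0, 12.0))   # illustrative D4 and D6
```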
In some embodiments, the image V1 and the image V2 may be displayed by different regions of the display 100. As shown in
Please refer to
Please refer to
Different from the first embodiment (as shown in the example (iii) of
The method of adjusting the accommodation position of one eye in this embodiment will be described in the following. Please refer to
In the example (i) of
Furthermore, the light beam Lb3, the light beam Lb4 and the light beam Lb5 emitted by the sub-pixel Px3, the sub-pixel Px4 and the sub-pixel Px5 may enter a pupil 114 of the eye 1021 in different view directions. In other words, the eye 1021 may view the light beam Lb3, the light beam Lb4 and the light beam Lb5 emitted by different sub-pixels at the same time. Based on the above principle, each light beam may respectively represent a picture, each picture may be displayed by one or a plurality of corresponding sub-pixels, and different pictures may be displayed by different sub-pixels. For example, an image provided by the display 100 may include the pictures represented by the light beam Lb1 to the light beam Lb7 at the same time, and the eye 1021 may view the pictures represented by the light beam Lb3, the light beam Lb4 and the light beam Lb5 at the same time. Offsets included between the pictures represented by the light beam Lb3, the light beam Lb4 and the light beam Lb5 in the same image may be generated by displaying the pictures represented by the light beam Lb3, the light beam Lb4 and the light beam Lb5 by different sub-pixels, thereby making the eye 1021 focus on the accommodation point Ap1.
In the example (ii) of
Furthermore, the light beam Lb8, the light beam Lb9 and the light beam Lb10 emitted by the sub-pixel Px8, the sub-pixel Px9 and the sub-pixel Px10 may enter the pupil 114 of the eye 1021 in different view directions. In other words, the eye 1021 may view the light beam Lb8, the light beam Lb9 and the light beam Lb10 emitted by different sub-pixels at the same time. For example, an image provided by the display 100 may include the pictures represented by the light beam Lb8 to the light beam Lb14 at the same time, and the eye 1021 may view the pictures represented by the light beam Lb8, the light beam Lb9 and the light beam Lb10 at the same time. Offsets included between the pictures represented by the light beam Lb8, the light beam Lb9 and the light beam Lb10 in the same image may be generated by displaying the pictures represented by the light beam Lb8, the light beam Lb9 and the light beam Lb10 by different sub-pixels, thereby making the eye 1021 focus on the accommodation point Ap2.
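The selection of which beams reach the eye is pure geometry, so the principle above can be illustrated with a simplified 2D pinhole model, given below as a sketch that is not taken from the disclosure: each sub-pixel emits its beam through a lenslet center, and only the rays landing within the pupil aperture are seen. All positions, the 5 µm sub-pixel pitch and the 4 mm pupil diameter are illustrative assumptions.

```python
def rays_into_pupil(subpixel_xs, lens_x, lens_z, pupil_x, pupil_z, pupil_d=0.004):
    """Return the sub-pixel x-positions whose beams enter the pupil.

    2D pinhole model: a sub-pixel at (x, 0) emits through the lenslet center
    at (lens_x, lens_z); its ray is extended to the pupil plane z = pupil_z
    and kept if it lands within the pupil aperture.
    """
    t = pupil_z / lens_z                      # scale factor to the pupil plane
    hits = []
    for x in subpixel_xs:
        x_at_pupil = x + (lens_x - x) * t     # ray from (x, 0) through lenslet
        if abs(x_at_pupil - pupil_x) <= pupil_d / 2:
            hits.append(x)
    return hits

# Seven sub-pixels 5 µm apart behind a lenslet 2 mm above them, pupil 0.5 m away
subs = [i * 5e-6 for i in range(-3, 4)]
print(rays_into_pupil(subs, lens_x=0.0, lens_z=0.002, pupil_x=0.0, pupil_z=0.5))
# -> only the three central sub-pixels reach the pupil, as with Lb3 to Lb5 above
```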
As shown in
In addition, taking
In the display method of the image of the present disclosure, the position of the vergence plane of the two eyes may be made different from the position of the image by adjusting the position of the vergence plane. The first distance exists between the vergence plane of the two eyes of the user and the user, and the third distance exists between the accommodation plane of one eye and the user when the user views the augmented reality image. The second distance exists between the vergence plane of the two eyes and the user when the user views the object in the environmental scene. The user may clearly view the augmented reality image and the object in the environmental scene at the same time, or the user's uncomfortable feeling may be effectively reduced, by controlling the first distance and the second distance within the range R1 of
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the disclosure. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
This application is a continuation application of U.S. application Ser. No. 17/495,829, filed on Oct. 7, 2021, which claims the benefit of U.S. Provisional Application No. 63/109,873, filed on Nov. 5, 2020. The contents of these applications are incorporated herein by reference.