This application claims priority from Korean Patent Application No. 10-2017-0094972, filed on Jul. 26, 2017, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
Example embodiments of the present disclosure relate to display apparatuses, and more particularly, to head-up display apparatuses and operating methods thereof.
With the growth of the automotive electronics business, interest in head-up displays that more effectively provide various kinds of information to a driver has steadily increased. Various head-up displays have been developed and commercialized, and automakers have released vehicles including built-in head-up displays.
Head-up displays may be divided into displays using a combiner and displays directly using a windshield. An image to be displayed may be a 2D image or a 3D image. At the current technological level, a widely used method for head-up displays is a floating method in which a 2D image is floated above a dashboard by using a mirror, or a 2D image is directly projected onto the dashboard.
However, as users' expectations increase with technological advances, demand for larger images that overlap objects in front of the user has increased. To address this demand, studies on projecting a 3D image in front of a user have been conducted.
Example embodiments provide head-up display apparatuses configured to provide a plurality of object images of which depth information is sequentially changed, and operating methods thereof.
Example embodiments also provide head-up display apparatuses configured to provide images to a user by matching an object in a reality environment with the object images.
According to an aspect of an example embodiment, there is provided a head-up display apparatus including a spatial light modulator configured to simultaneously output a plurality of object images to regions different from each other, a depth generation member configured to generate depth information with respect to the plurality of object images by using an optical characteristic to sequentially change depth information of at least two object images from among the plurality of object images in a direction perpendicular to a viewing angle, and an image converging member configured to converge the plurality of object images having the depth information and a reality environment on a single region by changing at least one of an optical path of the plurality of object images having the depth information and an optical path of the reality environment.
The depth generation member may generate depth information of the plurality of object images to increase the depth information of the plurality of object images from a lower region to an upper region of a viewing angle.
The depth generation member may generate depth information with respect to the plurality of object images to be provided in a horizontal direction of the viewing angle, wherein the plurality of object images have the same depth information.
The depth generation member may generate depth information with respect to the plurality of object images to change the depth information in units of the plurality of object images.
The optical characteristic may include at least one of refraction, diffraction, reflection, and scattering of light.
The optical characteristic of the depth generation member may change corresponding to regions of the depth generation member.
The optical characteristic of the depth generation member may be changed in a direction corresponding to a vertical direction of the viewing angle.
The depth generation member may include a first region that generates first depth information by using a first optical characteristic, and a second region that generates second depth information different from the first depth information by using a second optical characteristic different from the first optical characteristic.
The first and second regions may be arranged in a direction corresponding to the vertical direction of the viewing angle.
The types of the first optical characteristic and the second optical characteristic may be the same, and intensities of the first optical characteristic and the second optical characteristic may be different from each other.
The depth generation member may include at least one of an aspheric lens, an aspheric mirror, a lenticular lens, a cylindrical lens, a nano-pattern, and a meta-material.
The depth generation member may control sizes of the plurality of object images based on the depth information of the plurality of object images.
The sizes of the plurality of object images may be inversely proportional to the depth information of the plurality of object images.
The image converging member may include one of a beam splitter and a transflective film.
The image converging member may include a first region, and a second region having a curved interface which is in contact with the first region.
According to an aspect of an example embodiment, there is provided an operating method of a head-up display apparatus, the operating method including simultaneously outputting a plurality of object images to regions different from each other, generating, by using an optical characteristic, depth information with respect to the plurality of object images to sequentially change depth information of at least two object images from among the plurality of object images, and converging the plurality of object images having the depth information and a reality environment into a single region by changing at least one of an optical path of the plurality of object images having the depth information and an optical path of the reality environment.
The generating of the depth information may include generating depth information with respect to the plurality of object images to change the depth information in a vertical direction of a viewing angle.
The generating of the depth information may include generating depth information with respect to the plurality of object images to increase the depth information from a lower region to an upper region of the viewing angle.
The depth information may be changed in units of the plurality of object images.
The optical characteristic may include at least one of refraction, diffraction, reflection, and scattering of light.
The above and/or other aspects will become apparent and more readily appreciated from the following description of the example embodiments, taken in conjunction with the accompanying drawings in which:
Head-up display apparatuses and operating methods thereof will now be described in detail with reference to the accompanying drawings. In the drawings, the widths and thicknesses of layers or regions are exaggerated for clarity and convenience of explanation. Also, like reference numerals refer to like elements throughout the detailed description.
As used in the present detailed description, the terms “comprise”, “include”, and variants thereof should be construed as non-limiting with regard to the various constituent elements and operations described in the specification, such that recitation of some constituent elements or operations does not exclude other additional constituent elements and operations that may be useful in the head-up display apparatus and the operating method thereof.
It will be understood that when an element or layer is referred to as being “on” another element or layer, it may be directly or indirectly on, below, or at the left or right side of the other element or layer.
It will be understood that, although the terms “first”, “second”, etc. may be used herein to describe various elements, the elements should not be limited by these terms. These terms are only used to distinguish one element from another element.
The spatial light modulator 110 of the head-up display apparatus 100 may simultaneously output a plurality of object images to different regions (S11), the depth generation member 120 may generate depth information with respect to the object images so that at least some of the object images have sequentially changing depth information by using an optical characteristic (S12), and the image converging member 130 may converge the object images having the depth information and a reality environment on a single region by changing at least one of an optical path of the object images having the depth information and an optical path of the reality environment (S13).
The spatial light modulator 110 may output an image in units of frames. The image may be a two-dimensional (2D) image or a three-dimensional (3D) image. The 3D image may be, for example, a hologram image, a stereo image, a light field image, or an integral photography (IP) image. The image may include a plurality of partial images (hereinafter, ‘object images’) that show an object. The object images may be outputted from different regions of the spatial light modulator 110. Thus, when the spatial light modulator 110 outputs an image frame by frame, the plurality of object images may be simultaneously outputted to different regions. The object images may be 2D partial images or 3D partial images according to the type of the image.
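As a conceptual illustration of outputting a plurality of object images to different regions within a single frame, the following sketch treats one frame of a spatial light modulator as a 2D array and writes each object image into its own non-overlapping region. The array sizes, names, and region offsets are hypothetical and are not taken from the disclosure.

```python
import numpy as np

# Hypothetical frame resolution of the spatial light modulator.
FRAME_H, FRAME_W = 1080, 1920

def compose_frame(object_images, regions):
    """Write each object image into its own region of a single frame,
    so that all object images are output simultaneously."""
    frame = np.zeros((FRAME_H, FRAME_W), dtype=np.float32)
    for image, (row, col) in zip(object_images, regions):
        h, w = image.shape
        frame[row:row + h, col:col + w] = image  # one distinct region per image
    return frame

# Example: three object images placed in lower, middle, and upper regions.
images = [np.full((100, 200), v, dtype=np.float32) for v in (0.2, 0.5, 0.8)]
regions = [(900, 100), (500, 100), (100, 100)]  # (row, col) offsets, top-left origin
frame = compose_frame(images, regions)
```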
The spatial light modulator 110 may be a spatial light amplitude modulator, a spatial light phase modulator, or a spatial light complex modulator that modulates both an amplitude and a phase. The spatial light modulator 110 may be a transmissive light modulator, a reflective light modulator, or a transflective light modulator. For example, the spatial light modulator 110 may include a liquid crystal on silicon (LCoS) panel, a liquid crystal display (LCD) panel, a digital light projection (DLP) panel, an organic light emitting diode (OLED) panel, or a micro-organic light emitting diode (M-OLED) panel. The DLP panel may include a digital micromirror device (DMD).
The depth generation member 120 may generate depth information with respect to the object images so that at least some of the object images have sequentially changing depth information by using an optical characteristic. The optical characteristic may be at least one of reflection, scattering, refraction, and diffraction. The depth generation member 120 may generate depth information with respect to the object images by using regions or sub-members having different optical characteristics.
If the object images are 2D images, the depth generation member 120 may generate new depth information regarding the 2D images. If the object images are 3D images, the depth generation member 120 may change existing depth information by adding new depth information to the existing depth information.
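Expressed compactly, with notation introduced here for illustration only: if the depth generation member contributes depth $\Delta d$ to an object image whose existing depth is $d$, the resulting depth $d'$ is

```latex
d' =
\begin{cases}
\Delta d, & \text{2D object image (no existing depth information)} \\
d + \Delta d, & \text{3D object image (existing depth information changed)}
\end{cases}
```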
The depth generation member 120 may generate depth information with respect to the object images so that the depth information is sequentially changed in a direction perpendicular to a viewing angle.
The image converging member 130 may converge a plurality of object images having depth information and a reality environment on a single region by changing at least one of an optical path L1 of the object images having the depth information and an optical path L2 of the reality environment. The single region may be an ocular organ of a user, that is, an eye. The image converging member 130 may transmit light traveling along the optical paths L1 and L2 to a pupil of the user. For example, the image converging member 130 may transmit and guide light corresponding to the plurality of object images having the depth information along the first optical path L1, and external light corresponding to the reality environment along the second optical path L2, to an ocular organ 10 of the user.
Light of the first optical path L1 may be light reflected by the image converging member 130, and light of the second optical path L2 may be light transmitted through the image converging member 130. The image converging member 130 may be a transflective member having a combined characteristic of light transmission and light reflection. For example, the image converging member 130 may include a beam splitter or a transflective film.
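A rough intensity model for such a transflective element, given here as a standard idealization rather than a relation stated in the disclosure: with reflectance $R$ acting on the display path and transmittance $T$ acting on the environment path,

```latex
I_{\text{eye}} = R\, I_{\text{display}} + T\, I_{\text{environment}}, \qquad R + T \le 1,
```

where equality holds for a lossless element; a 50:50 beam splitter corresponds to $R = T = 0.5$.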
The plurality of object images having depth information transmitted by light of the first optical path L1 may be object images formed and provided by the head-up display apparatus 100. The object images having depth information may include virtual reality or virtual information as a ‘display image’. A reality environment transmitted by light of the second optical path L2 may be an environment viewed by a user through the head-up display apparatus 100. The reality environment may include a front view in front of the user and a background of the user. Accordingly, the head-up display apparatus 100 according to an example embodiment may be applied to realizing augmented reality (AR) or mixed reality (MR). In particular, when the head-up display apparatus 100 is applied to a vehicle, the reality environment may include, for example, roads. When the reality environment is viewed by a user in the vehicle, a distance to the reality environment may vary according to the position of the eye of the user.
When a user, for example, a driver, uses a head-up display apparatus, a distance from an eye of the user to a reality environment may vary according to a height of a viewing angle. For example, the reality environment at a lower region of the viewing angle may be a road in front of a bonnet of the vehicle or directly in front of the vehicle, and the reality environment at a middle region of the viewing angle may be a road further away from the road of the lower region of the viewing angle. The reality environment at an upper region of the viewing angle may be external environments including the sky. That is, a distance to the reality environment may vary according to the viewing angle, and a distance to the reality environment may gradually increase from the lower region to the upper region of the viewing angle.
The head-up display apparatus 100 according to an example embodiment may provide object images having depth information different from each other according to regions of a viewing angle. For example, the head-up display apparatus 100 may provide object images having depth information gradually increasing from a lower region to an upper region of a viewing angle. In this way, the object images and subjects, for example, roads or buildings in the reality environment may be matched to some degree, and thus, a user may more comfortably recognize the object images.
Also, the depth generation member 120 may generate different depth information with respect to the first through fourth object images 410, 420, 430, and 440 according to regions of a viewing angle. For example, when the first, second, and fourth object images 410, 420, and 440 are arranged in the vertical direction of the viewing angle, the depth generation member 120 may generate first through third depth information d1, d2, and d3 so that the first through third depth information d1, d2, and d3 are sequentially changed in the vertical direction of the viewing angle. For example, the depth generation member 120 may generate the depth information such that a magnitude of the depth information is gradually reduced from the third depth information d3 to the first depth information d1. That is, the depth generation member 120 may generate depth information with respect to the plurality of object images so that the depth information is gradually increased from the lower region to the upper region of the viewing angle, as in the sketch below. In this manner, the object images may be provided to different regions from each other according to the depth information.
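A minimal sketch of this monotonic mapping from vertical position to depth follows; the band boundaries and the depth values standing in for d1 < d2 < d3 are hypothetical placeholders.

```python
# Hypothetical depths (in meters) standing in for d1 < d2 < d3.
DEPTH_BY_BAND = {"lower": 1.0, "middle": 2.5, "upper": 5.0}

def depth_for_vertical_position(y_norm: float) -> float:
    """Map a normalized vertical position within the viewing angle
    (0.0 = lower region, 1.0 = upper region) to depth information."""
    if y_norm < 1.0 / 3.0:
        return DEPTH_BY_BAND["lower"]   # e.g., road just in front of the vehicle
    if y_norm < 2.0 / 3.0:
        return DEPTH_BY_BAND["middle"]  # e.g., road further ahead
    return DEPTH_BY_BAND["upper"]       # e.g., distant environment, sky
```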
The depth generation member 120 may generate depth information so that object images to be provided in the horizontal direction of a viewing angle have equal depth information.
Also, the depth generation member 120 may change sizes of the object images in the vertical direction of the viewing angle. For example, the depth generation member 120 may control the sizes of the object images so that the sizes are gradually reduced from the lower region to the upper region of the viewing angle. Also, the depth generation member 120 may control the sizes of the object images to be equal in the horizontal direction of the viewing angle. In this manner, the head-up display apparatus 100 may provide object images whose sizes decrease as their depth information increases and increase as their depth information decreases. Since this corresponds to the way the apparent size of a subject changes with perspective in a reality environment, a user may more easily recognize the object images.
The size control based on the depth information may be realized as one body with the depth generation member 120 that generates the depth information, or may be realized separately. The size control described above may also be performed based on an optical characteristic. The depth generation member 120 may control the sizes of the object images based on an optical characteristic and may change the sizes in inverse proportion to the depth information. However, example embodiments are not limited thereto.
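The inverse relation may be written as follows, where $s$ denotes the displayed size of an object image, $d$ its depth information, and $k$ a proportionality constant introduced here for illustration:

```latex
s(d) = \frac{k}{d}, \qquad d_1 < d_2 < d_3 \;\Longrightarrow\; s(d_1) > s(d_2) > s(d_3).
```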
As described above, the depth generation member 120 may generate depth information with respect to object images by using an optical characteristic. The optical characteristic may include at least one of reflection, scattering, refraction, and diffraction of light. According to the optical characteristic, a focal distance of the depth generation member 120 may be changed, and thus, an image forming location of an object image may be changed. Therefore, the depth generation member 120 may generate depth information based on the optical characteristic.
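As one standard illustration of how an optical characteristic can shift the image-forming location, consider the thin-lens relation, given here as general optics background rather than as the specific design of the depth generation member 120. An object at distance $d_o$ from a lens of focal length $f$ forms an image at distance $d_i$:

```latex
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}
\quad\Longrightarrow\quad
d_i = \frac{f\, d_o}{d_o - f}.
```

A region of the depth generation member having a different effective focal length $f$ thus forms the object image at a different distance, which the user perceives as different depth information.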
The depth generation member 120c realized as a lenticular lens or a meta-material may be formed as one body with the spatial light modulator 110. When the spatial light modulator 110 outputs a 3D object image, the spatial light modulator 110 may output an object image, depth information of which is sequentially changed in each region.
As described above, the depth generation members 120, 120a, 120c, and 120d may provide object images having different depth information from one another according to a height of a viewing angle, since an optical characteristic of each depth generation member is changed in a direction corresponding to the vertical direction of the viewing angle. According to an example embodiment, the depth generation members 120, 120a, 120c, and 120d may have the same optical characteristic in a direction corresponding to a horizontal direction of the viewing angle. Thus, object images having the same depth information may be provided in the same horizontal direction of the viewing angle.
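Stated compactly, in notation introduced here: if the local optical characteristic, for example an effective focal length $f$, varies only along a coordinate $y$ corresponding to the vertical direction of the viewing angle, then the generated depth also depends only on $y$:

```latex
f(x, y) = f(y) \quad\Longrightarrow\quad d(x, y) = d(y).
```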
The spatial light modulator 110 may be relatively small, and thus object images outputted from the spatial light modulator 110 and the plurality of object images having depth information generated by the depth generation member 120 may also be relatively small. The head-up display apparatus 100a according to an example embodiment may further include the magnifying member 140 arranged between the depth generation member 120 and the image converging member 130, and configured to magnify the object images having depth information. The magnifying member 140 may control a magnification rate of each of the object images in a direction corresponding to a vertical direction of a viewing angle.
The head-up display apparatus described above may be an element of a wearable apparatus. As an example, the head-up display apparatus may be applied to a head mounted display (HMD). Also, the head-up display apparatus may be applied to a glasses-type display or a goggle-type display. Wearable devices may operate in conjunction with, or connected to, smartphones.
A head-up display apparatus according to an example embodiment may generate, by using an optical characteristic, depth information with respect to a plurality of object images simultaneously outputted from a spatial light modulator. Also, the head-up display apparatus according to an example embodiment may provide a user with an image that may be viewed more comfortably, by matching an object in a reality environment with the object images.
Additionally, the head-up display apparatuses according to example embodiments may be applied to various electronic devices, including automotive apparatuses such as vehicles and general equipment, and may be used in various fields. The head-up display apparatus according to an example embodiment may be used to realize augmented reality (AR) or mixed reality (MR), and may also be applied to other fields. In other words, the head-up display apparatus according to an example embodiment may be applied to a multi-object image display that simultaneously displays a plurality of object images, even when the multi-object image display is not an AR display or an MR display.
While the example embodiments have been shown and described, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims, and their equivalents.
Number | Date | Country | Kind |
---|---|---|---
10-2017-0094972 | Jul 2017 | KR | national |