The present disclosure relates to a camera, a head-up display system, and a movable object.
A known technique is described in, for example, Patent Literature 1.
Patent Literature 1: WO 2018/142610
A camera according to an aspect of the present disclosure includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of a head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display.
A head-up display system according to an aspect of the present disclosure includes a camera and a head-up display. The camera includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of the head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display. The head-up display includes a display panel, an optical element, an optical system, a receiver, and a processor. The display panel displays a display image. The optical element defines a traveling direction of image light emitted from the display panel. The optical system directs the image light traveling in the traveling direction defined by the optical element toward the eye of the user and projects a virtual image of the display image in a field of view of the user. The receiver receives the second image output from the output unit. The processor causes the display panel to display the display image including a first display image and a second display image having parallax between the first display image and the second display image. The processor changes, based on a position of the eye of the user detected using the second image received from the camera, an area on the display panel on which the first display image is displayed and an area on the display panel on which the second display image is displayed.
A movable object according to an aspect of the present disclosure includes a head-up display system. The head-up display system includes a camera and a head-up display. The camera includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of the head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display. The head-up display includes a display panel, an optical element, an optical system, a receiver, and a processor. The display panel displays a display image. The optical element defines a traveling direction of image light emitted from the display panel. The optical system directs the image light traveling in the traveling direction defined by the optical element toward the eye of the user and projects a virtual image of the display image in a field of view of the user. The receiver receives the second image output from the output unit. The processor causes the display panel to display the display image including a first display image and a second display image having parallax between the first display image and the second display image. The processor changes, based on a position of the eye of the user detected using the second image received from the camera, an area on the display panel on which the first display image is displayed and an area on the display panel on which the second display image is displayed.
The objects, features, and advantages of the present disclosure will become more apparent from the following detailed description and the drawings.
A known head-up display (HUD) system, having the structure that forms the basis of a HUD system according to one or more embodiments of the present disclosure, delivers images having parallax between them to the left and right eyes of a user and projects a virtual image into the user's field of view to be viewed as a three-dimensional (3D) image with depth.
To allow the user to view appropriate images with the left and right eyes, the HUD system for displaying 3D images includes a camera for detecting the positions of the user's eyes. The camera is to track the varying positions of the user's eyes in real time. The camera may also serve as a driver monitoring camera for monitoring the movement of the driver's head, eyelids, or both.
The camera used for the HUD system is to detect the eye positions with high detection performance.
One or more embodiments of the present disclosure will now be described with reference to the drawings. The drawings used herein are schematic and are not drawn to scale relative to the actual size of each component.
As shown in
The HUD system 100 causes image light emitted from a display device 23 in the HUD 20 to travel toward eyes 32 of a user 31, as described in detail below. The HUD system 100 thus displays a virtual image in the field of view of the user 31. The HUD system 100 has an area within which the virtual image displayable by the HUD 20 is viewable by the eyes of the user 31. The area within which the virtual image displayable by the HUD 20 is viewable by the eyes of the user 31 is referred to as an eyebox area 33. With the eyes 32 outside the eyebox area 33, the user 31 cannot view the virtual image displayable by the HUD 20.
In
The HUD system 100 includes the camera 10 to detect the positions of the eyes 32 of the user 31 observing a 3D image. The eyes 32 of the user 31 include the left eye 32l (first eye) and the right eye 32r (second eye) of the user 31. The left eye 32l and the right eye 32r of the user 31 may herein be collectively referred to as the eyes 32 without being distinguished from each other. The camera 10 outputs an image including the eyes 32 of the user 31 to the HUD 20. For the HUD system 100 mounted on a vehicle as the movable object 30, the user 31 may be a driver of the movable object 30.
For the HUD system 100 mounted on a vehicle as the movable object 30, the camera 10 may be attached to a rearview mirror. The camera 10 may be attached to, for example, an instrument cluster. The camera 10 may be attached to a center panel. The camera 10 may be attached to the steering wheel support at the center of the steering wheel. The camera 10 may be attached to a dashboard.
As shown in
The lens 11 forms a subject image including the eyes 32 of the user 31 of the HUD 20 on the light-receiving surface of the image sensor 12. The lens 11 may include one or more lenses.
The image sensor 12 photoelectrically converts, in units of pixels, the subject image formed on the light-receiving surface and obtains signals for the respective pixels to obtain a first image. The image sensor 12 can output the first image as electrical signals. The image sensor 12 may include, for example, a charge-coupled device (CCD) image sensor or a complementary metal-oxide semiconductor (CMOS) image sensor. The first image has a resolution that allows detection of the pupil positions of the eyes 32 of the user 31.
The analog signal processor 13 includes an analog signal processing circuit. The analog signal processor 13 can perform various types of analog signal processing on analog signals representing the first image obtained by the image sensor 12. The analog signal processing may include correlated double sampling and amplification.
The A/D converter 14 includes an analog-digital converter circuit for converting analog signals resulting from the analog signal processing with the analog signal processor 13 into digital signals. The digital first image output from the A/D converter 14 may be stored in a memory incorporated in the camera 10. The analog signal processor 13 and the A/D converter 14 may be incorporated in the image sensor 12.
The digital signal processor 15, or a DSP, includes a processor for digital signal processing. The digital signal processor 15 prestores information about an area in the two-dimensional coordinate system of the image captured by the camera 10, obtained by converting the eyebox area 33 defined in a 3D space. The converted eyebox area 33 in the image captured by the camera 10 is hereafter referred to as the area including the eyebox area 33. The digital signal processor 15 may include a memory for storing the area including the eyebox area 33.
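For illustration only, the conversion from the eyebox area 33 in a 3D space to an area in two-dimensional image coordinates may be sketched with a pinhole camera model as below. The intrinsic matrix, the eyebox dimensions, and all numeric values are assumptions for the sketch, not values from the present disclosure.

```python
import numpy as np

# Illustrative pinhole projection of a 3D eyebox (camera coordinates,
# meters) into a 2D area in image coordinates. The intrinsic matrix K
# and all dimensions are assumed example values.
K = np.array([[800.0,   0.0, 640.0],   # fx,  0, cx
              [  0.0, 800.0, 360.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def project_points(points_3d: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = (K @ points_3d.T).T           # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide by depth

# Eight corners of an assumed eyebox: 0.3 m wide, 0.1 m tall, 0.1 m deep,
# centered 0.8 m in front of the camera.
corners = np.array([[x, y, z]
                    for x in (-0.15, 0.15)
                    for y in (-0.05, 0.05)
                    for z in (0.75, 0.85)])
uv = project_points(corners)

# Axis-aligned 2D area containing the projected eyebox; an area of this
# kind is what the digital signal processor 15 could prestore.
u_min, v_min = uv.min(axis=0).astype(int)
u_max, v_max = uv.max(axis=0).astype(int)
print(f"eyebox area in image coordinates: u=[{u_min},{u_max}], v=[{v_min},{v_max}]")
```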
The digital signal processor 15 obtains the first image as digital signals resulting from the conversion by the A/D converter 14 (step S01).
In the first stage of processing, the digital signal processor 15 reduces the resolution of the area other than the predefined eyebox area 33 (step S02). The resolution may be reduced by partially reducing information about the pixels of the digital first image in accordance with a predetermined rule. The digital signal processor 15 does not change the resolution of the area of the first image including the eyebox area 33.
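A minimal sketch of such a predetermined rule follows, assuming a grayscale frame and block averaging as the reduction rule; the function name, the box format, and the factor of 4 are illustrative assumptions.

```python
import numpy as np

def reduce_outside_eyebox(first_image: np.ndarray,
                          box: tuple[int, int, int, int],
                          factor: int = 4) -> np.ndarray:
    """Reduce the resolution of everything outside `box` (u0, v0, u1, v1).

    Each factor x factor block outside the eyebox area is replaced by its
    block average, so the frame keeps its size but carries less
    information; pixels inside the eyebox area are left untouched.
    Assumes a single-channel (grayscale) frame for brevity.
    """
    h, w = first_image.shape
    hc, wc = h // factor * factor, w // factor * factor
    # Block-average the whole frame, then broadcast each average back
    # over its block (nearest-neighbour upsampling keeps the frame size).
    blocks = first_image[:hc, :wc].reshape(hc // factor, factor,
                                           wc // factor, factor)
    low = blocks.mean(axis=(1, 3)).repeat(factor, axis=0).repeat(factor, axis=1)
    out = first_image.astype(float)
    out[:hc, :wc] = low
    # Restore full resolution inside the eyebox area.
    u0, v0, u1, v1 = box
    out[v0:v1, u0:u1] = first_image[v0:v1, u0:u1]
    return out.astype(first_image.dtype)
```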
In the second stage of processing, the digital signal processor 15 generates a second image resulting from any of various types of image processing on the image having the reduced resolution of the area other than the eyebox area 33 (step S03). The image processing may include color interpolation, color correction, brightness correction, and noise reduction.
The digital signal processor 15 outputs the second image to the output unit 16 (step S04).
The digital signal processor 15 reduces the image resolution of the area of the first image other than the eyebox area 33 in the first stage of processing in step S02. This allows the processing in the second stage in step S03 to be performed at higher speed. The digital signal processor 15 reduces the image resolution of the area other than the eyebox area 33 to produce the second image with a reduced overall volume of information.
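The volume reduction can be quantified with illustrative numbers; the frame size, eyebox size, and downsampling factor below are assumptions for the worked example only.

```python
# Illustrative volume comparison for one 1280x720, 8-bit frame; the frame
# size, eyebox size, and factor are assumptions.
w, h, bpp = 1280, 720, 1          # frame size and bytes per pixel
box_w, box_h = 400, 200           # eyebox area kept at full resolution
factor = 4                        # downsampling factor elsewhere

full = w * h * bpp                                      # 921,600 bytes
reduced = box_w * box_h * bpp \
        + (w * h - box_w * box_h) * bpp // factor**2    # 132,600 bytes
print(f"{reduced / full:.0%} of the original volume")   # ~14%
```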
The output unit 16 is an output interface of the camera 10 and may include a communication circuit such as a large-scale integration (LSI) circuit for wired or wireless communication, a connector, and an antenna. The output unit 16 outputs, to the HUD 20, the image signals of the second image resulting from the image processing with the digital signal processor 15. The output unit 16 processes the image signals to be output to the HUD 20 with a protocol for communication with the HUD 20. The output unit 16 may output, to the HUD 20, the image signals in a wired or wireless manner or through a communication network such as a controller area network (CAN).
The camera controller 17 can control the image sensor 12, the analog signal processor 13, the A/D converter 14, the digital signal processor 15, and the output unit 16. The camera controller 17 includes one or more processors.
As shown in
The HUD 20 in an embodiment includes a reflector 21, an optical member 22, and a display device 23. The reflector 21 and the optical member 22 form an optical system in the HUD 20. The optical system in the HUD 20 may include optical elements such as a lens and a mirror in addition to the reflector 21 and the optical member 22. In another embodiment, the optical system in the HUD 20 may include lenses without a reflector 21.
The reflector 21 reflects image light emitted from the display device 23 toward a predetermined area on the optical member 22. The predetermined area reflects image light toward the eyes 32 of the user 31. The predetermined area may be defined by the direction in which the eyes 32 of the user 31 are located relative to the optical member 22 and the direction in which image light is incident on the optical member 22. The reflector 21 may be a concave mirror. The optical system including the reflector 21 may have a positive refractive power.
The optical member 22 reflects image light emitted from the display device 23 and reflected by the reflector 21 toward the left eye 32l and the right eye 32r of the user 31. For example, the movable object 30 may include a windshield as the optical member 22. The optical member 22 may include a plate-like combiner for head-up display inside the windshield. The HUD 20 thus directs light emitted from the display device 23 to the left eye 32l and the right eye 32r of the user 31 along an optical path P. The user 31 views light reaching the eyes along the optical path P as a virtual image.
As shown in
The input unit 24 receives the second image 40 from the camera 10. The input unit 24 can communicate with the camera 10 in accordance with a predetermined communication scheme. The input unit 24 includes an interface for wired or wireless communication. The input unit 24 may include a communication circuit. The communication circuit may include a communication LSI circuit. The input unit 24 may include a connector for wired communication, such as an electrical connector or an optical connector. The input unit 24 may include an antenna for wireless communication.
The illuminator 25 illuminates the display panel 26 with planar illumination light. The illuminator 25 may include a light source, a light guide plate, a diffuser plate, and a diffuser sheet. The illuminator 25 emits, from the light source, illumination light that then spreads uniformly for illuminating the surface of the display panel 26 through, for example, the light guide plate, the diffuser plate, or the diffuser sheet. The illuminator 25 may emit the uniform light toward the display panel 26.
The display panel 26 may be, for example, a transmissive liquid crystal display panel. The display panel 26 is not limited to a transmissive liquid crystal panel but may be another display panel such as an organic electroluminescent (EL) display. For the display panel 26 being a self-luminous display panel, the display device 23 may not include the illuminator 25.
As shown in
Each divisional area corresponds to a subpixel. Thus, the active area A includes multiple subpixels arranged in a lattice in the horizontal direction and the vertical direction.
Each subpixel has one of the colors red (R), green (G), and blue (B). One pixel may be a set of three subpixels with R, G, and B. For example, multiple subpixels included in one pixel may be arranged in the horizontal direction. Multiple subpixels having the same color may be arranged, for example, in the vertical direction.
The multiple subpixels arranged in the active area A form subpixel groups Pg under control by the controller 28. Multiple subpixel groups Pg are arranged repeatedly in the horizontal direction. Each subpixel group Pg may be aligned with or shifted from the corresponding subpixel group Pg in the vertical direction. For example, the subpixel groups Pg are repeatedly arranged in the vertical direction at positions shifted by one subpixel in the horizontal direction from the corresponding subpixel group Pg in adjacent rows. The subpixel groups Pg each include multiple subpixels in predetermined rows and columns. More specifically, the subpixel groups Pg each include (2×n×b) subpixels P1 to PN (N=2×n×b), which are consecutively arranged in b rows in the vertical direction and in (2×n) columns in the horizontal direction. In the example shown in
Each subpixel group Pg is the smallest unit controllable by the controller 28 to display an image. The subpixels included in each subpixel group Pg are identified using the identification information P1 to PN (N=2×n×b). The subpixels P1 to PN (N=2×n×b) included in each subpixel group Pg with the same identification information are controlled by the controller 28 at the same time. For example, the controller 28 can switch the image to be displayed by the multiple subpixels P1 from the left eye image to the right eye image at the same time in all the subpixel groups Pg.
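A sketch of this simultaneous control follows, assuming n = 6 and b = 1 so that each subpixel group Pg holds N = 12 subpixels P1 to P12; the array representation is an illustrative model, not the disclosed implementation.

```python
import numpy as np

# Model of simultaneous subpixel control, assuming n = 6 and b = 1, so
# each subpixel group Pg holds N = 12 subpixels P1 to P12.
n, b = 6, 1
N = 2 * n * b

# Image shown per identification index (0 = left eye, 1 = right eye):
# here P1-P6 show the left eye image and P7-P12 the right eye image.
assignment = np.array([0] * 6 + [1] * 6)

# A panel row is tiled with repeating groups Pg, so the image shown by a
# subpixel in column c follows from c modulo N.
columns = np.arange(N * 40)            # illustrative row of 480 subpixels
row_images = assignment[columns % N]

# Switching P1 from the left eye image to the right eye image takes
# effect in every subpixel group Pg at the same time.
assignment[0] = 1
row_images = assignment[columns % N]
```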
As shown in
The parallax barrier 27 defines the traveling direction of image light emitted from the subpixels for each of multiple transmissive portions 271. As shown in
The transmissive portions 271 and the light-reducing portions 272 extend in a predetermined direction along the active area A. The transmissive portions 271 and the light-reducing portions 272 are arranged alternately in a direction orthogonal to the predetermined direction. For example, the predetermined direction is along a diagonal of the subpixels when the display panel 26 and the parallax barrier 27 are viewed in the depth direction (z-direction). For example, the predetermined direction may be the direction that crosses t subpixels in y-direction while crossing s subpixels in x-direction (s and t are relatively prime positive integers) when the display panel 26 and the parallax barrier 27 are viewed in the depth direction (z-direction). The predetermined direction may be y-direction. The predetermined direction corresponds to the direction in which the subpixel groups Pg are arranged. In the example shown in
The parallax barrier 27 may be formed from a film or a plate. In this case, the light-reducing portions 272 are parts of the film or plate. The transmissive portions 271 may be slits in the film or plate. The film may be formed from resin or another material. The plate may be formed from resin, metal, or another material. The parallax barrier 27 may be formed from a material other than a film or a plate. The parallax barrier 27 may include a base formed from a light-reducing material or a material containing an additive with light-reducing properties.
The parallax barrier 27 may include a liquid crystal shutter. The liquid crystal shutter can control the light transmittance in accordance with a voltage applied. The liquid crystal shutter may include multiple pixels and control the light transmittance for each pixel. The transmissive portions 271 and the light-reducing portions 272 are defined by the liquid crystal shutter and at positions corresponding to the pixels of the liquid crystal shutter. For the parallax barrier 27 including the liquid crystal shutter, the boundaries between the transmissive portions 271 and the light-reducing portions 272 may be staggered along the shapes of the pixels.
Image light emitted from the active area A on the display panel 26 partially transmits through the transmissive portions 271 and is reflected by the reflector 21 to reach the optical member 22. The image light is reflected by the optical member 22 and reaches the eyes 32 of the user 31. This allows the eyes 32 of the user 31 to view a first virtual image V1 frontward from the optical member 22. The first virtual image V1 is a virtual image of the image displayed on the active area A. The plane on which the first virtual image V1 is projected is referred to as a virtual image plane Sv. Being frontward herein refers to the direction in which the optical member 22 is located as viewed from the user 31. Being frontward is typically the direction of movement of the movable object 30. As shown in
The user 31 thus views the image appearing as the first virtual image V1 through the second virtual image V2. In reality, the user 31 does not view the second virtual image V2, or a virtual image of the parallax barrier 27. However, the second virtual image V2 is hereafter referred to as appearing at the position at which the virtual image of the parallax barrier 27 is formed and as defining the traveling direction of image light from the first virtual image V1. Areas in the first virtual image V1 viewable by the user 31 with image light reaching the positions of the eyes 32 of the user 31 are hereafter referred to as viewable areas Va. Areas in the first virtual image V1 viewable by the user 31 with image light reaching the position of the left eye 32l of the user 31 are referred to as left viewable areas VaL. Areas in the first virtual image V1 viewable by the user 31 with image light reaching the position of the right eye 32r of the user 31 are referred to as right viewable areas VaR.
As shown in
E:Vd=(n×VHp):Vg (1)
Vd:VBp=(Vd+Vg):(2×n×VHp) (2)
The virtual image barrier pitch VBp is the interval at which the light-reducing portions 272 projected as the second virtual image V2 are arranged in a direction corresponding to u-direction. The virtual image gap Vg is the distance between the second virtual image V2 and the first virtual image V1. The optimum viewing distance Vd is the distance between the second virtual image V2 of the parallax barrier 27 and the position of the left eye 32l or the right eye 32r of the user 31 indicated by positional information obtained from the camera 10. An interocular distance E is the distance between the left eye 32l and the right eye 32r. The interocular distance E may be, for example, 61.1 to 64.4 mm, as calculated through studies conducted by the National Institute of Advanced Industrial Science and Technology. VHp is the horizontal length of each subpixel of the virtual image, that is, the length of each subpixel of the first virtual image V1 in a direction corresponding to the first direction.
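With illustrative values substituted for E, Vd, n, and VHp, equations (1) and (2) can be solved for the virtual image gap Vg and the virtual image barrier pitch VBp as follows; the numeric values are assumptions for the worked example only.

```python
# Worked example of equations (1) and (2) with assumed values for the
# interocular distance E, optimum viewing distance Vd, n, and the
# horizontal virtual-image subpixel length VHp (all lengths in mm).
E, Vd, n, VHp = 62.0, 750.0, 6, 0.1

# (1)  E : Vd = (n x VHp) : Vg   ->   Vg = Vd * n * VHp / E
Vg = Vd * n * VHp / E                       # ~7.258 mm

# (2)  Vd : VBp = (Vd + Vg) : (2 x n x VHp)
#      ->   VBp = 2 * n * VHp * Vd / (Vd + Vg)
VBp = 2 * n * VHp * Vd / (Vd + Vg)          # ~1.1885 mm, just under 2*n*VHp

print(f"Vg = {Vg:.3f} mm, VBp = {VBp:.4f} mm")
```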
As described above, the left viewable areas VaL in
In the example shown in
The controller 28 may be connected to each of the components of the HUD system 100 to control these components. The components controlled by the controller 28 include the camera 10 and the display panel 26. The controller 28 may be, for example, a processor. The controller 28 may include one or more processors. The processors may include a general-purpose processor that reads a specific program and performs a specific function, and a processor dedicated to specific processing. The dedicated processor may include an application-specific integrated circuit (ASIC). The processor may include a programmable logic device (PLD). The PLD may include a field-programmable gate array (FPGA). The controller 28 may be either a system on a chip (SoC) or a system in a package (SiP) in which one or more processors cooperate with other components. The controller 28 may include a storage to store various items of information or programs to operate each component of the HUD system 100. The storage may be, for example, a semiconductor memory. The storage may serve as a work memory for the controller 28.
The memory 29 may include any storage device such as a random-access memory (RAM) or a read-only memory (ROM). The memory 29 stores information received by the input unit 24, information resulting from conversion by the controller 28, and other information.
The controller 28 can detect the positions of the left eye 32l and the right eye 32r of the user 31 based on the second image 40 obtained from the camera 10 through the input unit 24. As described below, the controller 28 causes the display panel 26 to display the right eye image and the left eye image having parallax between them based on information about the positions of the left eye 32l and the right eye 32r. The controller 28 switches the image to be displayed by the subpixels on the display panel 26 between the right eye image and the left eye image.
As described above, the left viewable areas VaL of the first virtual image V1 viewable by the left eye 32l of the user 31 may be located at the positions shown in
A change in the positions of the eyes 32 of the user 31 changes the parts of the subpixels P1 to P12 used to display the virtual image viewable by the left eye 32l and the right eye 32r of the user 31. The controller 28 determines the subpixels to display the left eye image and the subpixels to display the right eye image among the subpixels P1 to P12 in each subpixel group Pg in accordance with the positions of the eyes 32 of the user 31. For the subpixels determined to display the left eye image, the controller 28 causes these subpixels to display the left eye image. For the subpixels determined to display the right eye image, the controller 28 causes these subpixels to display the right eye image.
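A toy model of this determination is sketched below; the rule that one fixed step of horizontal eye displacement rotates the assignment by one subpixel index, and the step value itself, are illustrative assumptions rather than the disclosed geometry.

```python
n, b = 6, 1
N = 2 * n * b   # 12 subpixels P1 to P12 per group

def left_eye_subpixels(eye_shift_mm: float, step_mm: float = 10.0) -> set[int]:
    """Identification indices of the subpixels showing the left eye image.

    Toy rule: every `step_mm` of horizontal eye displacement rotates the
    assignment by one subpixel index; the step value is an assumed
    calibration. At zero displacement P1-P6 show the left eye image.
    """
    phase = round(eye_shift_mm / step_mm)
    return {(i + phase) % N + 1 for i in range(6)}

print(left_eye_subpixels(0.0))    # {1, 2, 3, 4, 5, 6}
print(left_eye_subpixels(10.0))   # {2, 3, 4, 5, 6, 7}
```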
For example, the eyes 32 of the user 31 observing the first virtual image V1 as shown in
The controller 28 controls the display panel 26 to allow the left eye image and the right eye image to be projected as a 3D image in the field of view of the user 31. The controller 28 causes the display panel 26 to display an image of a target 3D object included in the left eye image and the right eye image with intended parallax between these images. The controller 28 may cause the display panel 26 to display images with parallax between them prestored in the memory 29. The controller 28 may calculate the parallax based on the distance to the 3D object to be displayed in the 3D image in real time, and use the parallax to generate the left eye image and the right eye image to be displayed by the display panel 26.
The eyebox area 33 shown in
To detect the positions of the eyes 32 of the user 31, the controller 28 in the display device 23 detects the pupil positions of the eyes 32 of the user 31 in the high-resolution area 41 including the eyebox area 33. To switch images in units of subpixels based on the pupil positions of the user 31 as described above, the HUD 20 uses a high-resolution image that allows detection of the pupil positions.
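For illustration, a deliberately simple pupil locator restricted to the high-resolution area 41 is sketched below; a practical detector would be far more robust, and the darkest-centroid rule is an assumption, not the disclosed method.

```python
import numpy as np

def detect_pupil(second_image: np.ndarray,
                 box: tuple[int, int, int, int]) -> tuple[int, int]:
    """Deliberately simple pupil locator for a grayscale frame.

    Inside the high-resolution area `box` (u0, v0, u1, v1), take the
    centroid of the darkest 1% of pixels as the pupil position. This only
    illustrates why the eyebox area must keep its full resolution.
    """
    u0, v0, u1, v1 = box
    roi = second_image[v0:v1, u0:u1].astype(float)
    dark = roi <= np.percentile(roi, 1)        # mask of the darkest pixels
    v_idx, u_idx = np.nonzero(dark)
    return int(u_idx.mean()) + u0, int(v_idx.mean()) + v0
```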
The camera 10 mounted on the vehicle may output image information to devices other than the HUD 20. The devices other than the HUD 20 include a driver monitor for monitoring the driver's driving condition. For example, the driver monitor can monitor the movement of the driver's head or eyelids and alert the driver with sound or vibration upon detection of a potentially dangerous driving condition such as drowsiness of the user 31. To determine drowsiness of the user 31, the driver monitor uses an image of the eyebox area 33 and also uses an image of the area other than the eyebox area 33. Images used by the driver monitor need not have a resolution high enough to allow detection of the pupils of the eyes 32 of the user 31.
In the present embodiment, the second image 40 includes the low-resolution area 42 as the area other than the eyebox area 33. Image information with a reduced volume can thus be transmitted from the output unit 16 in the camera 10 to the input unit 24 in the display device 23. This structure can reduce the delay or latency in communicating image information. The HUD system 100 can thus reduce the delay in processing while accurately detecting the eye positions with the camera 10. The positions of the eyes 32 are thus detected with higher detection performance.
In the present embodiment, the digital signal processor 15 in the camera 10 reduces the resolution of the area of the first image other than the eyebox area 33 in the first stage before performing any of various types of image processing in the second stage. This allows faster image processing in the second stage including color interpolation, color correction, brightness correction, and noise reduction.
In the present embodiment, the same camera 10 may be used for the HUD 20 and a device other than the HUD 20. For the HUD 20, the camera 10 detects the eyes 32 of the user 31 using a limited part of the entire image, or the high-resolution area 41. For a device other than the HUD 20 that does not use a high-resolution image of the user 31, the camera 10 uses the entire second image 40 including the low-resolution area 42 with a lower resolution. The entire system including the HUD system 100 thus has higher performance with less delay in processing.
In the above embodiments, the eyebox area 33 is a fixed predefined area. In this case, the eyebox area 33 is to be defined as a large area in which the eyes 32 can be located to accommodate different sitting heights and driving postures of the user 31. However, the setting of the eyebox area 33 may be limited by, for example, the layout of the optical system in the HUD 20. In this case, the HUD 20 may include an adjuster for adjusting the position of the eyebox area 33 in accordance with the positions of the eyes 32 of the user 31.
The HUD system 110 includes a reflector 21 whose position, orientation, or both are adjustable in accordance with the positions of the eyes 32 of the user 31. The HUD 20 includes an adjuster 51 for adjusting the position, the orientation, or both of the reflector 21. The adjuster 51 includes a drive. The drive may be a stepper motor. The adjuster 51 may be driven by a manual operation performed by the user 31. The adjuster 51 may be controlled by the controller 28 based on the positions of the eyes 32 obtained by the display device 23 from the camera 10. The HUD 20 may include a transmitter 52 for outputting information about the position of the eyebox area 33 to the camera 10. The information about the position of the eyebox area 33 is, for example, information about the reflector 21 being driven by the adjuster 51. The transmitter 52 includes a communication interface for transmitting information to the camera 10 in a wired or wireless manner.
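How the camera 10 might use such information can be sketched as follows; the linear relation between mirror tilt and eyebox shift, and its gain, are assumed calibration values, not part of the present disclosure.

```python
# Sketch of updating the prestored eyebox area when the transmitter 52
# reports that the adjuster 51 has driven the reflector 21.
PIXELS_PER_DEGREE = 35.0   # assumed vertical eyebox shift per degree of tilt

def updated_eyebox(box: tuple[int, int, int, int],
                   tilt_deg: float) -> tuple[int, int, int, int]:
    """Shift the stored 2D eyebox area (u0, v0, u1, v1) vertically."""
    dv = round(tilt_deg * PIXELS_PER_DEGREE)
    u0, v0, u1, v1 = box
    return (u0, v0 + dv, u1, v1 + dv)

print(updated_eyebox((440, 260, 840, 460), tilt_deg=1.5))  # shifted by ~52 px
```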
As shown in
The system with the above structure can define the eyebox area 33 at an appropriate position independently of the sitting height and the driving posture of the user 31. The digital signal processor 15 can thus generate the second image 40 including the high-resolution area 41 including the defined eyebox area 33 and the low-resolution area 42 other than the eyebox area 33. The HUD system 110 can thus reduce the delay in processing while accurately detecting the eye positions with the camera 10, as in the embodiment shown in
The movable object according to one or more embodiments of the present disclosure includes a vehicle, a vessel, or an aircraft. The vehicle according to one or more embodiments of the present disclosure includes, but is not limited to, an automobile or an industrial vehicle, and may also include a railroad vehicle, a community vehicle, or a fixed-wing aircraft traveling on a runway. The automobile includes, but is not limited to, a passenger vehicle, a truck, a bus, a motorcycle, or a trolley bus, and may also include another vehicle traveling on a road. The industrial vehicle includes an agricultural vehicle or a construction vehicle. The industrial vehicle includes, but is not limited to, a forklift or a golf cart. The agricultural vehicle includes, but is not limited to, a tractor, a cultivator, a transplanter, a binder, a combine, or a lawn mower. The construction vehicle includes, but is not limited to, a bulldozer, a scraper, a power shovel, a crane vehicle, a dump truck, or a road roller. The vehicle includes a man-powered vehicle. The classification of the vehicle is not limited to the above. For example, the automobile may include an industrial vehicle traveling on a road, and one type of vehicle may fall within a plurality of classes. The vessel according to one or more embodiments of the present disclosure includes a jet ski, a boat, or a tanker. The aircraft according to one or more embodiments of the present disclosure includes a fixed-wing aircraft or a rotary-wing aircraft.
Although embodiments of the present disclosure have been described with reference to the drawings and examples, those skilled in the art can easily make various modifications or alterations based on one or more embodiments of the present disclosure. Such modifications or alterations also fall within the scope of the present disclosure. For example, the functions of the components or steps are reconfigurable unless any contradiction arises. A plurality of components or steps may be combined into a single unit or step, or a single component or step may be divided into separate units or steps. The embodiments of the present disclosure can also be implemented as a method or a program implementable by a processor included in the device, or as a storage medium storing the program. These method, program, and storage medium also fall within the scope of the present disclosure.
In the present disclosure, the first, the second, and others are identifiers for distinguishing the components. The identifiers of the components distinguished with the first, the second, and others in the present disclosure are interchangeable. For example, the first lens can be interchangeable with the second lens. The identifiers are to be interchanged together. The components for which the identifiers are interchanged are also to be distinguished from one another. The identifiers may be eliminated. The components without such identifiers can be distinguished with reference numerals. The identifiers such as the first and the second in the present disclosure alone should not be used to determine the order of the components or to imply the existence of components with smaller or larger identifier numbers.
In the present disclosure, x-direction, y-direction, and z-direction are used for ease of explanation and may be interchangeable with one another. The orthogonal coordinate system including axes in x-direction, y-direction, and z-direction is used to describe the structures according to the present disclosure. The positional relationship between the components in the present disclosure is not limited to the orthogonal relationship. The same applies to u-direction, v-direction, and w-direction.
In the above embodiments, the optical element that defines the traveling direction of image light is a parallax barrier. However, the optical element is not limited to a parallax barrier. The optical element may be a lenticular lens.
In the above embodiments, the positions of the eyes 32 of the user 31 are detected by the controller 28 in the display device 23 based on the second image 40. The positions of the eyes 32 of the user 31 may be detected by the camera 10 based on the second image 40 generated by the digital signal processor 15. For example, the camera 10 may define a predetermined position as the origin and detect the direction and amount of displacement of the positions of the eyes 32 from the origin. The camera 10 may detect the positions of the eyes 32 of the user 31 and transmit, from the output unit 16, information indicating the positions of the eyes 32 to the input unit 24 in the display device 23. The controller 28 can control the display panel 26 in accordance with the obtained information indicating the positions of the eyes 32.
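A sketch of this alternative output format follows; the origin value and the function are illustrative assumptions.

```python
# Sketch of camera-side output: the camera 10 reports the direction and
# amount of eye displacement from a predefined origin instead of sending
# the image. The origin value is an assumption.
ORIGIN_U, ORIGIN_V = 640, 360   # assumed reference position in pixels

def eye_displacement(eye_u: int, eye_v: int) -> tuple[int, int]:
    """(du, dv) the output unit 16 could transmit to the input unit 24."""
    return eye_u - ORIGIN_U, eye_v - ORIGIN_V

print(eye_displacement(655, 348))   # (15, -12): right of and above the origin
```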
The present disclosure may be implemented in the following forms.
A camera according to one or more embodiments of the present disclosure includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of a head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display.
A head-up display system according to one or more embodiments of the present disclosure includes a camera and a head-up display. The camera includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of the head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display. The head-up display includes a display panel, an optical element, an optical system, a receiver, and a processor. The display panel displays a display image. The optical element defines a traveling direction of image light emitted from the display panel. The optical system directs the image light traveling in the traveling direction defined by the optical element toward the eye of the user and projects a virtual image of the display image in a field of view of the user. The receiver receives the second image output from the output unit. The processor causes the display panel to display the display image including a first display image and a second display image having parallax between the first display image and the second display image. The processor changes, based on a position of the eye of the user detected using the second image received from the camera, an area on the display panel on which the first display image is displayed and an area on the display panel on which the second display image is displayed.
A movable object according to one or more embodiments of the present disclosure includes a head-up display system. The head-up display system includes a camera and a head-up display. The camera includes an image sensor, a signal processor, and an output unit. The image sensor obtains a first image being a subject image including an eye of a user of the head-up display. The signal processor generates a second image by reducing a resolution of an area of the first image other than an eyebox area. The eyebox area is an area within which a virtual image displayable by the head-up display is viewable by the eye of the user. The output unit outputs the second image to the head-up display. The head-up display includes a display panel, an optical element, an optical system, a receiver, and a processor. The display panel displays a display image. The optical element defines a traveling direction of image light emitted from the display panel. The optical system directs the image light traveling in the traveling direction defined by the optical element toward the eye of the user and projects a virtual image of the display image in a field of view of the user. The receiver receives the second image output from the output unit. The processor causes the display panel to display the display image including a first display image and a second display image having parallax between the first display image and the second display image. The processor changes, based on a position of the eye of the user detected using the second image received from the camera, an area on the display panel on which the first display image is displayed and an area on the display panel on which the second display image is displayed.
The camera, the HUD system, and the movable object including the HUD system according to one or more embodiments of the present disclosure can detect the eye positions with higher detection performance.
The present disclosure may be embodied in various forms without departing from the spirit or the main features of the present disclosure. The embodiments described above are thus merely illustrative in all respects. The scope of the present disclosure is defined not by the description given above but by the claims. Any modifications and alterations contained in the claims fall within the scope of the present disclosure.
Number | Date | Country | Kind
---|---|---|---
2019-179694 | Sep 2019 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2020/034565 | 9/11/2020 | WO |