The invention relates to a display system and a method, and more particularly, to a display system and a method for displaying images which vary with a viewer's sightline.
The human field of view has a limited range (including a horizontal visual angle and a vertical visual angle). To expand the field of view, a person must constantly change viewing angles as well as viewing directions. For example, assuming a vehicle is parked in front of a viewer in the real world, from the place where the viewer stands, he/she may only see the front side of the vehicle because of the limited field of view. However, when the viewer moves to the right and views the same vehicle toward the left, the viewer can then see a partial front side and a partial lateral side of the vehicle. That is, by changing the viewing angle and direction, the field of view can be expanded indefinitely in the real world.
Nonetheless, the situation is different when it comes to images displayed on a display device. Given the limited size of display devices, images can only be presented in conformity with the size of the display device. Consequently, the information that can be displayed is also restricted.
Besides, conventional displays adopt a perspective transform to compress a 3D object into a 2D format. However, images presented on conventional screens are static. That is, an image remains unchanged no matter where the viewer is. The viewing experience is therefore different from that in the real world.
According to one aspect of the present disclosure, a display system is provided. The display system includes an image capturing module, a processing unit, and a display device. The image capturing module is configured to capture a facial image of a viewer. The processing unit is coupled to the image capturing module and configured to perform the following instructions. A facial feature is identified based on the facial image, and a left eye position and a right eye position are computed. A left eye viewing vector and a right eye viewing vector are computed based on the left eye position and the right eye position, respectively. A left eye view is generated based on the left eye viewing vector. A right eye view is generated based on the right eye viewing vector. An image fusion processing is performed on the left eye view and the right eye view to render a fused image. The display device is coupled to the processing unit and configured to display the fused image.
According to another aspect of the present disclosure, another display system is provided. The display system includes an image capturing module, a processing unit, and a display device. The image capturing module is configured to capture a first facial image of a viewer at a first time and a second facial image of the viewer at a second time. The processing unit is coupled to the image capturing module and configured to perform the following instructions. A first facial feature is identified based on the first facial image, and a first left eye position and a first right eye position are computed. A first left eye viewing vector and a first right eye viewing vector are computed based on the first left eye position and the first right eye position, respectively. A first left eye view is generated based on the first left eye viewing vector. A first right eye view is generated based on the first right eye viewing vector. An image fusion processing is performed on the first left eye view and the first right eye view to render a first fused image. A second facial feature is identified based on the second facial image, and a second left eye position and a second right eye position are computed. A second left eye viewing vector and a second right eye viewing vector are computed based on the second left eye position and the second right eye position, respectively. A second left eye view is generated based on the second left eye viewing vector. A second right eye view is generated based on the second right eye viewing vector. An image fusion processing is performed on the second left eye view and the second right eye view to render a second fused image. The display device is coupled to the processing unit and configured to display the first fused image at the first time and display the second fused image at the second time.
According to yet another aspect of the present disclosure, a method for displaying images is provided. The method includes the following steps. A facial image of a viewer is captured at a first time. A facial feature is identified based on the facial image, and a left eye position and a right eye position are computed. A left eye viewing vector and a right eye viewing vector are computed based on the left eye position and the right eye position, respectively. A left eye view is generated based on the left eye viewing vector. A right eye view is generated based on the right eye viewing vector. An image fusion processing is performed on the left eye view and the right eye view to render a first fused image. The first fused image is displayed at the first time.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
In the present disclosure, a display system and a method for displaying images on a display system are provided to generate a displayed image according to a sightline of a viewer. Via the display system, an appearance of the object presented to the viewer may vary with the sightline of the viewer as if the object was observed in the real world, which gives the viewer a more realistic user experience. In addition, various displayed images may be provided according to various sightlines of the viewer so as to expand the field of view of the viewer.
The display device 32 is disposed inside the cabin 20. The display device is configured to display a fused image. The display device 32 may be, but not limited to, a digital vehicle instrument cluster, a central console panel, or a head-up display.
The processing unit 33 is coupled to the image capturing module 31 and the display device 32. The processing unit 33 may be an intelligent hardware device, such as a central processing unit (CPU), a microcontroller, or an ASIC. The processing unit 33 may process data and instructions. In this embodiment, the processing unit 33 is an automotive electronic control unit (ECU). The processing unit 33 is configured to identify a facial feature based on the facial image captured by the image capturing module 31, generate a left eye view and a right eye view, and perform image fusion processing on the left eye view and the right eye view to render a fused image.
As previously mentioned, conventional display devices present images statically. An image displayed on a conventional display does not change regardless of the viewing direction. From the viewer's perspective, the field of view with respect to a common display is constant. On the other hand, the fused image provided in accordance with the instant disclosure may change with different viewpoints of the viewer. Therefore, the field of view of the viewer may be expanded even though the display area is fixed.
Firstly, a facial image of the viewer is captured by the image capturing module 31.
Based on the facial image 5, a facial feature 50 is identified by the processing unit 33. That includes computations of a left eye position and a right eye position. The facial feature 50 may be identified via image recognition and image processing computations familiar to skilled persons. Alternatively, the processing unit 33 may establish a facial model 51 before identifying the facial feature 50.
According to the identified facial feature, a coordinate system is established by the processing unit 33, where an origin of the coordinate system may be set at any point. The coordinate system is referenced when it comes to relative positions of, for instance and without limitation, the viewer 9, the object 4, the image capturing module 31, and the display device 32. In one instance, the origin may be set in light of the virtual space 49. For example, the origin may be set at a point (e.g., a center of mass or a center of volume) of the displayed object 4, or at the center of the virtual space 49. In this implementation, the origin of the coordinate system is set at the center of the object 4.
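The object-centered coordinate system described above can be illustrated with a minimal sketch (not part of the claimed embodiments; all names and numeric values are illustrative assumptions): positions measured in the sensor's frame are translated so that the object's center becomes the origin.

```python
import numpy as np

def to_object_frame(point_sensor, object_center_sensor):
    """Translate a point from the sensor frame into the object-centered frame."""
    return np.asarray(point_sensor, dtype=float) - np.asarray(object_center_sensor, dtype=float)

# Illustrative example: a viewer's head measured 60 cm in front of the camera,
# with the object's center 20 cm behind the display plane in the same frame.
head_sensor = [0.1, 0.0, 0.6]
object_center = [0.0, 0.0, -0.2]
head_object_frame = to_object_frame(head_sensor, object_center)
```

Once every position (viewer, camera, display) is expressed in this common frame, the vector arithmetic in the following steps becomes straightforward subtraction and addition.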
The position of the viewer is obtained and recorded with reference to the coordinate system. The processing unit 33 obtains the position (e.g., a head position or an eye position) of the viewer using 3D sensing technologies. For instance, the image capturing module 31 is a stereo camera (with two or more lenses) used for obtaining the position of the viewer. In some other implementations, the image capturing module 31 includes a depth sensor used for obtaining the position of the viewer.
A left eye position vector E1 and a right eye position vector E2 are calculated. The left eye position vector E1 from the left eye position 501 to the image capturing module 31 is computed based on the position of the viewer and the left eye position 501. The right eye position vector E2 from the right eye position 502 to the image capturing module 31 is computed based on the position of the viewer and the right eye position 502.
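Computed this way, each position vector is simply the camera position minus the eye position. A hedged sketch (illustrative only; the coordinates below are assumed values, not from the disclosure):

```python
import numpy as np

def position_vector(eye_pos, camera_pos):
    """Vector from an eye position to the image capturing module."""
    return np.asarray(camera_pos, dtype=float) - np.asarray(eye_pos, dtype=float)

# Assumed positions in a common coordinate frame (meters).
left_eye = [-0.03, 0.0, 0.6]
right_eye = [0.03, 0.0, 0.6]
camera = [0.0, 0.1, 0.0]

E1 = position_vector(left_eye, camera)   # left eye  -> camera
E2 = position_vector(right_eye, camera)  # right eye -> camera
```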
Next, the sightline of the viewer to the display device 32 is determined. The sightline (including a gaze direction and a gaze angle) of the viewer may be represented by a left eye viewing vector 401 and a right eye viewing vector 402. Based on the position vector P, the left eye position vector E1, and the right eye position vector E2, the processing unit 33 computes the left eye viewing vector 401 from the left eye position 501 to the object 4 and the right eye viewing vector 402 from the right eye position 502 to the object 4. In this embodiment, the left eye position 501 and the right eye position 502 of the viewer are utilized to determine the sightline of the viewer.
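If P is taken as the vector from the image capturing module to the object, and E1/E2 as the eye-to-module vectors, each viewing vector follows by vector addition: eye-to-object = eye-to-module + module-to-object. A minimal sketch under those assumptions (names and values are illustrative):

```python
import numpy as np

def viewing_vector(eye_to_camera, camera_to_object):
    """Eye-to-object viewing vector as the sum of two known vectors."""
    return np.asarray(eye_to_camera, dtype=float) + np.asarray(camera_to_object, dtype=float)

E1 = np.array([0.03, 0.1, -0.6])  # left eye -> image capturing module (assumed)
P = np.array([0.0, -0.1, -0.2])   # image capturing module -> object (assumed)

V_left = viewing_vector(E1, P)    # left eye -> object
```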
In some embodiments, the head position 500 identified based on the facial features 50 of the viewer is used to determine the sightline of the viewer. In yet another embodiment, the head pose identified based on the facial features 50 is used to determine the sightline of the viewer. In some embodiments, the eye gesture identified based on the facial features 50 is used to determine the sightline of the viewer. In some other embodiments, other facial features are used to determine the sightline of the viewer.
After the sightline of the viewer (i.e., the left eye viewing vector 401 and the right eye viewing vector 402) is determined, a left eye view and a right eye view are generated.
In the real world, the vision of the left eye may not be exactly identical to the vision of the right eye. Specifically, when an object is being observed, the left eye captures more information about a left side of the object, while the right eye captures more information about a right side of the object. In the present disclosure, all graphic information of the object observed by the left eye and the right eye is preserved. To provide the viewer with a more realistic visual effect, the display system 3 of the present disclosure generates two images, each containing the graphic information corresponding to the left eye and the right eye according to the left eye position and the right eye position, respectively, and then performs image fusion processing to integrate all the graphic information into one fused image. In contrast to a conventional display system that provides an image corresponding only to a single sightline, the display system of the present disclosure displays a more realistic image and therefore improves the visual experience of the viewer.
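One simple way to realize such a fusion, sketched here purely for illustration (the disclosure does not specify a particular fusion algorithm), is to average the two views where they overlap and keep each view's exclusive pixels unchanged, so no graphic information from either eye is lost:

```python
import numpy as np

def fuse_views(left_view, right_view, left_mask, right_mask):
    """Fuse two eye views: average in the overlap, keep exclusive regions."""
    fused = np.zeros_like(left_view, dtype=float)
    both = left_mask & right_mask        # overlapping region
    only_l = left_mask & ~right_mask     # seen only by the left eye
    only_r = right_mask & ~left_mask     # seen only by the right eye
    fused[both] = (left_view[both] + right_view[both]) / 2.0
    fused[only_l] = left_view[only_l]
    fused[only_r] = right_view[only_r]
    return fused

# Tiny 1x3 example: middle pixel visible to both eyes.
left = np.array([[10.0, 10.0, 0.0]])
right = np.array([[0.0, 20.0, 20.0]])
lmask = np.array([[True, True, False]])
rmask = np.array([[False, True, True]])
fused = fuse_views(left, right, lmask, rmask)
```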
In action 100, as shown in
In action 110, as shown in
In action 120, as shown in
In action 130, as shown in
In action 140, as shown in
In action 150, the fused image 7 is displayed on the display device 32.
Through the abovementioned actions, the method for displaying images on a display system of the present disclosure may track the direction and the angle of the viewer's sightline based on the positions of the viewer's left eye and right eye and then render an image of the object according to the viewer's sightline. Moreover, the sightline of the viewer may be tracked according to a head position, a head pose, an eye gesture, or other facial features of the viewer. As mentioned before, when a conventional display device displays an image of an object, the displayed image of the object is static and identical at any viewpoint of the viewer. In contrast, the display system of the present disclosure renders the displayed image according to the direction and the angle of the viewer's sightline so that the displayed object may be presented as if the object were observed in the real world. For example, if the sightline of the viewer shifts from a right bottom side to a left top side of the object, a left top side of the object is displayed by the display device 32 as if the object were observed from a left top position.
Furthermore, in the present disclosure, a left eye view and a right eye view are generated based on the position of the viewer, and then the image fusion processing is performed on the left eye view and the right eye view to render a fused image. Therefore, by implementation of the parallax between the left and right eyes, the displayed object 4 looks more realistic.
Besides, a range of vision may be extended. As mentioned above, the left eye captures graphic information that is outside the field of view of the right eye, and vice versa. In the present disclosure, all graphic information, including the left graphic information and the right graphic information, is preserved so that all the graphic information may be presented to the viewer according to the direction and the angle of the viewer's sightline. Therefore, the displayed image may vary with the viewer's sightline, and more contents of the object may be displayed even though the position and the size of the display device 32 are fixed and limited, and thus the range of vision may be extended.
In another embodiment, a display system and method for displaying images are provided for displaying various graphic information or images corresponding to various viewpoints of the viewer so as to expand the field of view. The display system and the method are described as follows with reference to
When the viewer shifts from the first position 1000 to the second position 2000 at the second time, a second facial image 6 of the viewer at the second position 2000 is captured by the image capturing module 31. Similarly, the second facial image 6 includes a left eye 91 and a right eye 92. Next, a second facial feature 60 is identified by the processing unit 33 based on the second facial image 6. In one embodiment, the second facial feature 60 includes a second left eye position 601 and a second right eye position 602. In one implementation, the processing unit 33 may establish a second facial model 61 before identifying the second facial feature 60. In another embodiment, the second facial feature 60 further includes the head position 600. In yet another embodiment, the second facial feature 60 further includes a head pose. In some embodiments, the second facial feature 60 further includes an eye gesture.
According to the identified facial feature, a second left eye position 601 and a second right eye position 602 are computed.
After the second left eye viewing vector 405 and the second right eye viewing vector 406 are computed, a second left eye view and a second right eye view are generated.
Based on the above, no matter where the viewer is, the display system of the present disclosure utilizes the abovementioned process to generate the left eye view corresponding to the viewer's left eye and the right eye view corresponding to the viewer's right eye, and then performs image fusion processing on the two views to render a displayed image corresponding to the viewer's sightline. In addition, a displayed image observed by the viewer at the first position 1000 is different from a displayed image observed by the viewer at the second position 2000. That is, the display system of the present disclosure displays different parts of an object in response to the viewer's sightline, which corresponds to the real-life experience in which a viewer changes location to observe an object thoroughly. For example, when the viewer at the first position 1000 in front of and facing towards the display device observes an object displayed by the display device, the viewer sees a front side of the object. When the viewer shifts the sightline left (that is, viewing the display device from the right), the viewer observes more information on the right side of the object. When the viewer shifts the sightline right (that is, viewing the display device from the left), the viewer observes more information on the left side of the object.
In some other embodiments, various display information could be selectively displayed on the display device corresponding to the viewer's sightline.
Afterward, at a second time, when the driver shifts the sightline to the left (e.g., the driver moves his/her head to the right and looks towards the left), the displayed image is changed; for example, temperature information is displayed on the left section of the digital vehicle instrument cluster, as shown in
As such, the method for displaying images on a display system according to different sightlines of the viewer is provided.
In action 200, a first facial image 5 of the viewer is captured by an image capturing module 31 at the first time when the viewer is at the first position 1000.
In action 210, a first facial feature 50 is identified, by the processing unit 33, based on the first facial image 5 and a first left eye position 501 and a first right eye position 502 are computed.
In action 220, a first left eye viewing vector 401 and a first right eye viewing vector 402 are computed, by the processing unit 33, based on the first left eye position 501 and the first right eye position 502.
In action 230, a first left eye view 41 and a first right eye view 42 are generated, by the processing unit 33, based on the first left eye viewing vector 401 and the first right eye viewing vector 402, respectively; where the first left eye view 41 and the first right eye view 42 overlap with each other to form a first overlapping region 43, the first left eye view 41 includes a first left graphic information of the object 4, and the first right eye view 42 includes a first right graphic information of the object 4.
In action 240, an image fusion processing is performed, by the processing unit 33, on the first left eye view 41 and the first right eye view 42 to render a first fused image 7.
In action 250, the first fused image 7 is displayed on the display device 32 when the viewer is at the first position 1000.
In action 260, a second facial image 6 of the viewer is captured by an image capturing module 31 at the second time when the viewer is at the second position 2000.
In action 270, a second facial feature 60 is identified, by the processing unit 33, based on the second facial image 6, and a second left eye position 601 and a second right eye position 602 are computed.
In action 280, a second left eye viewing vector 405 and a second right eye viewing vector 406 are computed, by the processing unit 33, based on the second left eye position 601 and the second right eye position 602.
In action 290, a second left eye view 45 and a second right eye view 46 are generated, by the processing unit 33, based on the second left eye viewing vector 405 and the second right eye viewing vector 406, respectively; where the second left eye view 45 and the second right eye view 46 overlap with each other to form a second overlapping region 47, the second left eye view 45 includes a second left graphic information of the object 4, and the second right eye view 46 includes a second right graphic information of the object 4.
In action 300, the image fusion processing is performed, by the processing unit 33, on the second left eye view 45 and the second right eye view 46 to render the second fused image 8.
In action 310, the second fused image 8 is displayed on the display device 32 when the viewer is at the second position 2000.
In one implementation, the image capturing module 31 captures images at several times, and the processing unit 33 calculates the position of the viewer and generates the corresponding image to be displayed. In another implementation, the processing unit 33 detects a motion of the viewer, determines a motion vector (including a distance and a direction of the motion) when the motion of the viewer is detected, and then adjusts the first fused image in response to the motion vector. For instance, instead of performing actions 260-310, when the processing unit 33 detects that the viewer moves 10 cm to the right, the processing unit 33 adjusts the first fused image by shifting it 10 cm to the right. It is noted that the mapping between the viewer's motion and the variation of the fused image may not be a 1:1 mapping.
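This motion-vector adjustment can be sketched as a simple image shift (illustrative only; the gain between viewer motion and pixel shift is an assumed parameter, reflecting the note above that the mapping need not be 1:1):

```python
import numpy as np

def shift_image(image, dx_pixels):
    """Shift image columns right by dx_pixels (left if negative), zero-filling."""
    shifted = np.zeros_like(image)
    if dx_pixels > 0:
        shifted[:, dx_pixels:] = image[:, :-dx_pixels]
    elif dx_pixels < 0:
        shifted[:, :dx_pixels] = image[:, -dx_pixels:]
    else:
        shifted = image.copy()
    return shifted

# Assumed gain: how many pixels the fused image moves per cm of viewer motion.
PIXELS_PER_CM = 0.5
viewer_motion_cm = 2
dx = int(viewer_motion_cm * PIXELS_PER_CM)

frame = np.arange(12).reshape(3, 4)
adjusted = shift_image(frame, dx)
```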
In some implementations, the processing unit 33 tracks a gaze of the viewer, determines a gaze vector (including a variation of a distance and a direction of the gaze) when the gaze of the viewer moves, and then adjusts the first fused image in response to the gaze vector. For instance, instead of performing actions 260-310, when the processing unit 33 detects that the gaze of the viewer has changed, the processing unit 33 calculates the gaze vector and then adjusts the fused image accordingly.
In the above embodiments, the object 4 is set as the origin of the coordinate system. However, in some other embodiments, there are multiple objects/items/information to be displayed, and each one may be selectively displayed according to the sightlines of the viewer. In this case, the origin of the coordinate system may be set at a center of the virtual space 49 so that the left/right eye vectors of the viewer at the first position 1000 and the second position 2000 can be conveniently computed.
Besides the abovementioned facial feature computation and image fusion processing, the image capturing module may further include a processor for performing image processing, such as high-dynamic-range (HDR) imaging or depth-of-field adjustment. In some other embodiments, the image capturing module transmits raw image data to the processing unit 33 to compute parameters, such as angle, distance, or depth of field, for rendering images.
The display system and method for displaying images of the present disclosure display images corresponding to the sightlines of the viewer, which provides the viewer with a more realistic visual effect similar to the real-life experience of observing an object. Besides, since the displayed images vary with different sightlines of the viewer, more contents of the object may be selectively displayed within a limited size or range of the display device, and thus the range of vision of the viewer may be extended substantially.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
107121038 | Jun 2018 | TW | national |
This patent application claims the benefit of U.S. provisional patent application Ser. No. 62/583,524, which is filed on Nov. 9, 2017, and incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
62583524 | Nov 2017 | US |