The invention relates to a display system and a method, and more particularly, to a display system and a method of displaying images within the limited display area of a display device.
The human field of view has a limited range (including a horizontal visual angle and a vertical visual angle). To expand the field of view, a person has to constantly change viewing angles as well as viewing directions. For example, assuming a vehicle is parked in front of a viewer in the real world, from the place where the viewer stands, he/she may only see the front side of the vehicle because of the limited scope of the field of view. However, when the viewer moves to the right and views the same vehicle from the right side, the viewer can see a partial front side and a partial lateral side of the vehicle. That is, by changing the viewing angle and direction, the field of view can be expanded indefinitely in the real world.
Nonetheless, the situation is different for images displayed on a display device. Given the limited size of display devices, images can only be presented in conformity with the size of the display device. Consequently, the information that can be displayed is also restricted.
Besides, a conventional display adopts a perspective transform to flatten a 3D object into a 2D format. However, images presented on conventional screens are static. That is, an image remains unchanged no matter where the viewer is. The viewing experience is different from that in the real world.
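By way of a non-limiting illustration, the conventional flattening mentioned above corresponds to the pinhole perspective transform. The following Python sketch (the function name and the unit focal length are illustrative assumptions) shows why such an image is static: the projection depends only on the object, never on the viewer.

```python
import numpy as np

def perspective_project(point_3d, focal_length=1.0):
    """Pinhole perspective transform: flatten a 3D point onto a 2D
    image plane by dividing by its depth. Note that no viewer position
    appears here, so the projected image never changes with the
    viewer's sightline."""
    x, y, z = point_3d
    return np.array([focal_length * x / z, focal_length * y / z])
```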
According to one aspect of the present disclosure, a display system is provided. The display system includes an image capturing module, a processing unit, and a display device. The image capturing module is configured to capture a head image of a viewer. The processing unit is coupled to the image capturing module and configured to perform the following instructions. A head vector is computed based on the head image. A left eye position and a right eye position are computed based on a face image of the viewer. A left eye field of view and a right eye field of view are generated based on the head vector, the left eye position and the right eye position. A binocular stereoscopic field of view is generated based on the left eye field of view and the right eye field of view. The display device is coupled to the processing unit and configured to display an image in the binocular stereoscopic field of view.
According to another aspect of the present disclosure, a display system is provided. The display system includes an image capturing module, a processing unit, and a display device. The image capturing module is configured to capture a first head image of the viewer at a first position and a second head image of the viewer at a second position. The processing unit is coupled to the image capturing module and configured to perform the following instructions. A first head vector of the viewer at the first position is computed based on the first head image. A first facial image of the viewer is obtained, and a first left eye position and a first right eye position of the viewer are computed based on the first facial image. A first left eye field of view, a first right eye field of view and a first binocular stereoscopic field of view are generated based on the first left eye position, the first right eye position and the first head vector. A second head vector of the viewer at the second position is computed based on the second head image. A second facial image of the viewer is obtained, and a second left eye position and a second right eye position of the viewer are computed based on the second facial image. A second left eye field of view, a second right eye field of view and a second binocular stereoscopic field of view are generated based on the second left eye position, the second right eye position and the second head vector. The display device displays a first image in the first binocular stereoscopic field of view when the viewer is at the first position, and displays a second image in the second binocular stereoscopic field of view when the viewer is at the second position.
According to yet another aspect of the present disclosure, a method for displaying a navigation map including geographic data and information is provided. The method includes the following actions. The geographic data and information are stored in a database. A first image including first geographic data and information is displayed, by the display device, when the viewer is at a first position. A second image including second geographic data and information is displayed, by the display device, upon determining that the viewer has moved from the first position to a second position.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
In this disclosure, directional terminology, such as “top”, “bottom”, “front”, “back”, “left”, “right”, is used with reference to the orientation of the Figure(s) being described. However, the components of the present disclosure may be positioned in several different orientations. As such, the directional terminology is used for illustration purposes only. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature and not as restrictive.
In the present disclosure, a display system and a method for displaying images on a display system are provided to generate an image according to a sightline of a viewer. Via the display system, an appearance of the object presented to the viewer may vary with the sightline of the viewer, as if the object were observed in the real world, which gives the viewer a more realistic user experience. In addition, various displayed images may be provided according to various sightlines of the viewer so as to expand the field of view of the viewer.
The displaying device 32 is disposed inside the cabin 20 and is configured to display a fused image. The displaying device 32 may be, but is not limited to, a digital vehicle instrument cluster, a central console panel, or a head-up display.
The processing unit 34 is coupled to the image capturing module 30 and the displaying device 32. The processing unit 34 may be an intelligent hardware device, such as a central processing unit (CPU), a microcontroller, or an application-specific integrated circuit (ASIC). The processing unit 34 may process data and instructions. In this embodiment, the processing unit 34 is an automotive electronic control unit (ECU).
As previously mentioned, conventional display devices present images statically. An image displayed on a conventional display does not change with the viewing direction. From the viewer's perspective, the field of view is constant. In contrast, the image provided in accordance with the instant disclosure may change with different sightlines of a viewer. Therefore, the field of view of the viewer may be expanded even though the display area is fixed.
In the present disclosure, the images provided by the display system 3 may change with the sightline of the viewer. The displaying device 32 provides a visual effect that a 3D object is placed in a virtual space. Because of the visual effect, when viewing the 3D object on the displaying device 32, the viewer feels as if the 3D object extends within the virtual space. Therefore, the displaying device 32 may present any aspect of the 3D object as an image to the viewer, as if it were a real object in the real world, even though the displaying device 32 is a flat display device. Furthermore, content with different depths may also be displayed in the virtual space. Moreover, based on the sightline of the viewer, the same content (such as a map including geographic data and information) may be presented to the viewer in different ways or at different positions within the virtual space.
Conventionally, when a viewer is looking at a screen, given the size limitation, the screen can only present a navigation map having partial geographic data and neighborhood information. For instance, the viewer can only see a limited range of the map on the screen, and roads and buildings at the edges of the screen are cut off. In order to get further information, the viewer must manually zoom in, zoom out, move or drag the map, which is impractical and dangerous while the viewer is driving.
On the other hand, the display system 3 of the present disclosure comprehensively preserves the entire geographic data and neighborhood information. For instance, as shown in
In all, the display system 3 of the present disclosure determines the sightline of the viewer and changes the displayed content accordingly. Through the operation of the present disclosure, more content can be shown on a displaying device whose size is limited. More precisely, as illustrated in
In another example, as shown in
Following the above example and as shown in
A system and a method for displaying images on the display system 3 are described as follows with reference to
The relative positions of the viewer, the displaying device 32 and the displayed image M according to an embodiment of the present disclosure are also illustrated in
Firstly, the image capturing module 30 captures a head image 42 of the viewer's head 40.
In this embodiment, a coordinate system is established by the processing unit 34, where an origin of the coordinate system may be set at any point, and the position of the viewer is obtained and recorded with reference to the coordinate system. The processing unit 34 obtains the position (e.g., a head position or an eye position) of the viewer using 3D sensing technologies. For instance, the image capturing module 30 may be a stereo camera (with two or more lenses) used for obtaining the position of the viewer. In some other implementations, the image capturing module 30 includes a depth sensor used for obtaining the position of the viewer.
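As one non-limiting illustration of how a stereo camera may yield the viewer's position, the classic disparity-to-depth relation can be sketched as follows; the calibration parameters (baseline, focal length in pixels, principal point) are assumptions of the example rather than values taken from the disclosure.

```python
import numpy as np

def head_position_from_stereo(u_left, u_right, v, baseline_m, focal_px, cx, cy):
    """Triangulate a head feature seen by a calibrated stereo pair.

    u_left / u_right: horizontal pixel coordinates of the same feature
    in the left / right images; v: its vertical pixel coordinate.
    Returns (x, y, z) in meters in the camera coordinate system, whose
    origin is the image capturing module.
    """
    disparity = u_left - u_right            # pixels; > 0 for points in front
    z = focal_px * baseline_m / disparity   # classic stereo depth relation
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])
```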
Since the image capturing module 30 is a fixture on (or nearby) the displaying device 32, and the viewer is seated in the cabin 20, a position of the image capturing module 30 is known and invariant. The position of the cabin 20 is also known to the processing unit 34. Therefore, based on the positions of the image capturing module 30 and the viewer, the processing unit 34 computes a head vector R from the viewer's head 40 to the displaying device 32. The head vector R indicates a position of the viewer's head 40 and includes the distance D1 between the viewer's head 40 and the displaying device 32.
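In such a coordinate system, computing the head vector R reduces to a subtraction; a minimal sketch follows, assuming the known display position is expressed in the same coordinates (placing it at the origin is an arbitrary illustrative choice).

```python
import numpy as np

# Known, invariant position of the displaying device 32 in the shared
# coordinate system (illustrative value).
DISPLAY_POS = np.array([0.0, 0.0, 0.0])

def head_vector(head_pos, display_pos=DISPLAY_POS):
    """Head vector R from the viewer's head 40 to the displaying
    device 32, together with the distance D1 between them."""
    r = display_pos - np.asarray(head_pos)
    d1 = np.linalg.norm(r)   # distance D1
    return r, d1
```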
Based on the head image 42 shown in
Next, the processing unit 34 computes a left eye position and a right eye position of the viewer based on the facial image 44.
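One plausible realization of this step, sketched below, detects the two pupils as 2D landmarks in the facial image 44 and back-projects them with the camera intrinsics; reusing the head distance D1 as the eye depth, like the parameter names, is an assumption of the sketch rather than a requirement of the disclosure.

```python
import numpy as np

def eye_positions(eye_left_px, eye_right_px, head_depth_m, focal_px, cx, cy):
    """Back-project 2D pupil landmarks to 3D eye positions.

    eye_*_px: (u, v) pixel coordinates of each pupil in the facial
    image (from any landmark detector); the head depth is reused as an
    approximation of the eye depth.
    """
    def back_project(u, v):
        x = (u - cx) * head_depth_m / focal_px
        y = (v - cy) * head_depth_m / focal_px
        return np.array([x, y, head_depth_m])
    return back_project(*eye_left_px), back_project(*eye_right_px)
```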
Next, the processing unit 34 generates a left eye field of view LFOV, a right eye field of view RFOV and the binocular stereoscopic field of view BFOV based on the head vector R, the left eye position and the right eye position.
In addition, the processing unit 34 computes a left eye rendered image LRI based on the left eye field of view LFOV, and a right eye rendered image RRI based on the right eye field of view RFOV. Then, the processing unit 34 computes the image PC (as shown in FIG. 3) based on the left eye rendered image LRI and the right eye rendered image RRI.
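A common way to realize such per-eye fields of view on a fixed flat panel is an off-axis (asymmetric) viewing frustum whose apex is the eye and whose base is the screen rectangle; how the two renders are then fused into the displayed image depends on the panel type. The sketch below assumes a screen plane perpendicular to the z-axis, and shows column interleaving purely as one possible fusion scheme, not as the method of the disclosure.

```python
import numpy as np

def eye_frustum(eye_pos, screen_center, screen_w, screen_h, near=0.01):
    """Off-axis frustum (field of view) for one eye: returns the
    (left, right, bottom, top) extents at the near plane, as consumed
    by a standard off-axis projection matrix. Because the frustum base
    is the physical screen, the rendered view shifts as the eye moves."""
    d = screen_center[2] - eye_pos[2]   # eye-to-screen distance along z
    scale = near / d
    left   = (screen_center[0] - screen_w / 2 - eye_pos[0]) * scale
    right  = (screen_center[0] + screen_w / 2 - eye_pos[0]) * scale
    bottom = (screen_center[1] - screen_h / 2 - eye_pos[1]) * scale
    top    = (screen_center[1] + screen_h / 2 - eye_pos[1]) * scale
    return left, right, bottom, top

def fuse(left_rendered, right_rendered):
    """Combine the per-eye renders into one displayed image by column
    interleaving (one illustrative scheme among several)."""
    fused = left_rendered.copy()
    fused[:, 1::2] = right_rendered[:, 1::2]   # odd columns from the right eye
    return fused
```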
In one implementation where the display system 3 is applied to navigation, the display system 3 may include a database configured to store geographic data and information. The display system 3 may acquire, from the database, the geographic data and information according to the sightline of the viewer and display the corresponding contents on the displaying device 32. In another implementation, only parts of the geographic data and information are stored in the database. When any geographic data and information is required to be shown on the displaying device 32, the processing unit 34 may perform real-time computation to obtain the necessary content and display it.
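A minimal sketch of the database lookup follows; the feature record, the square query region, and the linear scan are all illustrative simplifications (a real system would typically derive the region from the sightline and use a spatial index).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MapFeature:
    name: str
    x: float   # map easting
    y: float   # map northing

def visible_features(features: List[MapFeature],
                     center: Tuple[float, float],
                     half_extent: float) -> List[MapFeature]:
    """Return the stored geographic data falling inside the square
    region currently covered by the viewer's field of view."""
    cx, cy = center
    return [f for f in features
            if abs(f.x - cx) <= half_extent and abs(f.y - cy) <= half_extent]
```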
As shown in
In action S100, the image capturing module 30 captures a head image of the viewer.
In action S101, the processing unit 34 computes a head vector R based on the head image.
In action S102, the processing unit 34 computes a left eye position and a right eye position of the viewer based on the facial image and facial features of the viewer. The facial image is obtained from the head image by the processing unit 34 or captured by the image capturing module 30.
In action S103, the processing unit 34 generates a left eye field of view, a right eye field of view and a binocular stereoscopic field of view based on the head vector, the left eye position and the right eye position.
In action S104, the processing unit 34 computes a left eye rendered image based on the left eye field of view.
In action S105, the processing unit 34 computes a right eye rendered image based on the right eye field of view.
In action S106, the processing unit 34 computes an image based on the left eye rendered image and the right eye rendered image.
In action S107, the displaying device 32 displays the image in the binocular stereoscopic field of view.
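Read as a whole, actions S100 to S107 form one pipeline pass. The wiring below is a sketch only: every argument is a placeholder callable for the corresponding stage, so the structure of the flow, not any particular implementation, is what it records.

```python
def run_display_pipeline(capture_head_image, compute_head_vector,
                         compute_eye_positions, generate_fovs,
                         render_eye_view, fuse, display):
    """One pass of actions S100-S107; each argument is a stage callable."""
    head_image = capture_head_image()                                # S100
    head_vec = compute_head_vector(head_image)                       # S101
    left_eye, right_eye = compute_eye_positions(head_image)          # S102
    lfov, rfov, bfov = generate_fovs(head_vec, left_eye, right_eye)  # S103
    lri = render_eye_view(lfov)                                      # S104
    rri = render_eye_view(rfov)                                      # S105
    image = fuse(lri, rri)                                           # S106
    display(image, bfov)                                             # S107
```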
In another embodiment, the display system 3 of the present disclosure may provide various display contents according to the sightline of the viewer. In one implementation, the sightline of the viewer may be determined according to the position of the viewer's head 40 (e.g., represented by the head vector R). In some implementations, the sightline of the viewer may be determined according to the viewer's facial features. For example, the display system 3 may perform image processing on the captured head image 42 and facial image 44 to obtain the positions of the viewer's head 40, the viewer's left eye 401 and the viewer's right eye 402. Accordingly, the processing unit 34 computes a left eye field of view LFOV and a right eye field of view RFOV, and renders an image PC for the displaying device 32 to display. As stated above, the image PC is presented in the viewer's binocular stereoscopic field of view BFOV corresponding to the positions of the head 40, the left eye 401 and the right eye 402.
Since the image capturing module 30 is a fixture on (or nearby) the displaying device 32, a capturing angle and a capturing range of the image capturing module 30 are invariant. Therefore, the head image 42 or the facial image 44 captured by the image capturing module 30 varies with different positions of the viewer.
For example, as shown in
On the other hand, as shown in
It should be noted that the position of the image capturing module 30 is selected to be the origin of the coordinate system referenced by the display system 3 for computing the displacement vector (i.e., a distance and a direction of the movement of the viewer). In some other embodiments, the displacement vector may include not only an x component but also y and/or z components, when the viewer moves his/her head forward/backward and/or upward/downward.
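In that coordinate system, the displacement vector is again a subtraction; a minimal sketch with illustrative numbers follows.

```python
import numpy as np

def displacement_vector(head_pos_t1, head_pos_t2):
    """Displacement of the viewer's head between two captures, in the
    coordinate system whose origin is the image capturing module 30.
    All three components are kept: x (left/right), y (up/down) and
    z (forward/backward)."""
    return np.asarray(head_pos_t2) - np.asarray(head_pos_t1)

# Example: the viewer shifts 12 cm to the left and 3 cm backward.
move = displacement_vector([0.10, 0.0, 0.60], [-0.02, 0.0, 0.63])
# -> array([-0.12,  0.  ,  0.03])
```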
Based on the above, assuming the viewer is in a first position at a first time (as shown in
Alternatively, as shown in
To sum up, the display system 3 of the present disclosure visually establishes a virtual space. In effect, when the viewer moves left and turns his/her sightline to the right, the viewer observes the right corner of the virtual space. In addition, when the viewer moves right and turns his/her sightline to the left, the viewer observes the left corner of the virtual space. By utilizing the virtual space, the display system can not only display additional content despite the size limitation of a screen, but also change the content in accordance with the perspectives of the viewer. In one implementation, the abovementioned display system and method may be applied to a navigation map including geographic data and information. In one instance, when the viewer is at the second position X1 and watches the displaying device 32 as illustrated in
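This window behavior of the virtual space can be verified with a little geometry: extending the rays from an eye through the screen's edges onto a virtual plane behind the screen shows that moving left reveals more of the right side, and that the two eyes together cover a wider interval than either eye alone. All dimensions below (screen half-width, distances, a 64 mm interpupillary distance) are illustrative assumptions.

```python
def visible_interval(eye_x, half_w=0.15, d=0.6, depth=2.0):
    """Interval of a virtual plane at `depth` visible through the
    screen aperture [-half_w, half_w] at distance d from the viewer,
    for an eye at lateral offset eye_x (all lengths in meters)."""
    lo = eye_x + (-half_w - eye_x) * depth / d
    hi = eye_x + (half_w - eye_x) * depth / d
    return lo, hi

left_eye = visible_interval(-0.032)    # left eye of a 64 mm IPD pair
right_eye = visible_interval(+0.032)
# Moving an eye left extends the visible interval to the right, and the
# binocular union is wider than either monocular interval alone:
union = (min(left_eye[0], right_eye[0]), max(left_eye[1], right_eye[1]))
```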
In some embodiments, the viewer may zoom in or zoom out on the displayed image.
In action S200, the image capturing module 30 captures a first head image when the viewer is at the first position.
In action S201, the processing unit 34 computes a first position of the viewer. For instance, the first position is represented by a first head vector based on the first head image, where the first head vector includes a first distance between the viewer's head and the displaying device 32.
In action S202, the processing unit 34 computes a first binocular stereoscopic field of view based on the first head image.
In action S203, the displaying device 32 displays a first image in the first binocular stereoscopic field of view when the viewer is at the first position.
In action S204, the image capturing module 30 captures a second head image when the viewer is at the second position.
In action S205, the processing unit 34 computes a second position of the viewer. For instance, the second position is represented by a second head vector based on the second head image, where the second head vector includes a second distance between the viewer's head and the displaying device 32.
In action S206, the processing unit 34 computes a second binocular stereoscopic field of view based on the second position.
In action S207, the displaying device 32 displays a second image in the second binocular stereoscopic field of view when the viewer is at the second position.
In some embodiments, the first position and the second position are further determined according to a left eye position and a right eye position of the viewer. In this embodiment, the method further includes the procedure as shown in
In action S300, the processing unit 34 obtains a first facial image, where the first facial image is obtained from the first head image or captured by the image capturing module 30.
In action S301, the processing unit 34 computes a first left eye position and a first right eye position of the viewer based on the first facial image.
In action S302, the processing unit 34 computes a first left eye field of view and a first right eye field of view based on the first head vector, the first left eye position and the first right eye position; and the processing unit 34 obtains a first binocular stereoscopic field of view, which is the combination of the first left eye field of view and the first right eye field of view.
In action S303, the processing unit 34 obtains a second facial image, where the second facial image is obtained from the second head image or captured by the image capturing module 30.
In action S304, the processing unit 34 computes a second left eye position and a second right eye position of the viewer based on the second facial image.
In action S305, the processing unit 34 computes a second left eye field of view and a second right eye field of view based on the second head vector, the second left eye position and the second right eye position; and the processing unit 34 obtains a second binocular stereoscopic field of view, which is the combination of the second left eye field of view and the second right eye field of view.
In some other embodiments, the method further includes procedures as shown in
In action S400, the processing unit 34 computes a first left eye rendered image based on the first left eye field of view.
In action S401, the processing unit 34 computes a first right eye rendered image based on the first right eye field of view.
In action S402, the processing unit 34 renders the first image based on the first left eye rendered image and the first right eye rendered image.
In action S403, the processing unit 34 computes a second left eye rendered image based on the second left eye field of view.
In action S404, the processing unit 34 computes a second right eye rendered image based on the second right eye field of view.
In action S405, the processing unit 34 renders the second image based on the second left eye rendered image and the second right eye rendered image.
Based on the above, by exploiting the parallax between the left and right eyes, the range of the field of view may be increased.
Based on the above, through the operation of the virtual space, the displaying device of the present disclosure is able to display additional content that a conventional screen cannot achieve. For example, by simply changing the sightline, the viewer may observe more content on a 12.3″ screen without performing complicated operations manually.
The display system of the present disclosure may capture a head image of the viewer, compute a binocular stereoscopic field of view based on the head image, and display an image in the binocular stereoscopic field of view accordingly. Therefore, the display system of the present disclosure displays images in accordance with the human binocular vision. Moreover, the display system of the present disclosure renders images based on different positions of the viewer; each of the images rendered corresponds to the relative position between the viewer and the displaying device.
The above actions are discussed in a given order, but the present disclosure may also be achieved by performing the same steps in a different order, or with additional steps.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Foreign Application Priority Data: Taiwan (TW) Application No. 107121103, filed June 2018.
This patent application claims the benefit of U.S. provisional patent application Ser. No. 62/583,524, which was filed on Nov. 9, 2017, and which is incorporated herein by reference in its entirety.