This application is the national stage entry under 35 U.S.C. § 371 of International Application PCT/EP2015/063435 filed Jun. 16, 2015, which was published in accordance with PCT Article 21(2) on Dec. 23, 2015, in English, and which claims the benefit of European Application No. 14305923.6 filed Jun. 17, 2014. The European and PCT applications are expressly incorporated by reference herein in their entirety for all purposes.
The present disclosure generally relates to a display device with pixel repartition optimization. The display device may be a user wearable display device such as a head mounted display (HMD) device, but is not limited to this kind of display device.
These HMD devices are mainly composed of a display module (LCD or OLED, for example) and optics. These optics are usually designed to modify the light as if it were generated at infinity, or at a finite but large distance (e.g., the hyperfocal distance of the human eye) from the viewer, to allow accommodation to a screen placed so close to the eyes, and to increase the field of view to improve the feeling of immersion.
These HMD devices may be coupled with a sensor such as an inertial measurement unit (IMU) to measure the position of a user's head. Thanks to the sensor, the video content provided to the user through the display can depend on his/her head orientation, so the user can move in a virtual world and experience a feeling of immersion.
US 2012/0154277 A1 discloses tracking the user's head and eye position in order to determine a focal region for the user and to couple a portion of an optimized image to the user's focal region. However, US 2012/0154277 A1 does not consider any concept for optimizing the pixel repartition of an image to be presented on the display.
According to one aspect of the present disclosure, a method for presenting an image on a display device is provided. The method includes modifying the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with a higher density of pixels than in the rest of the image.
According to another aspect of the present disclosure, a display device for presenting an image, comprising a processor, is provided. The processor is configured to modify the image by applying a geometric transformation to the image so that an area of the image on the display device is presented to a viewer with a higher density of pixels than in the rest of the image.
The object and advantages of the present disclosure will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the disclosure, as claimed.
These and other aspects, features and advantages of the present disclosure will become apparent from the following description in connection with the accompanying drawings in which:
In the following description, various aspects of an exemplary embodiment of the present disclosure will be described. For the purpose of explanation, specific configurations and details are set forth in order to provide a thorough understanding. However, it will also be apparent to one skilled in the art that the present disclosure may be implemented without the specific details presented herein.
In order to facilitate the understanding of the concept of an embodiment of the disclosure, some characteristics of the Human Visual System are first introduced.
The resolution of display devices, which generate images by means of separate, discrete pixels, relies on this visual acuity characteristic: depending on the distance of observation, individual pixels cannot be perceived if their spatial frequency is higher than the separation capability of the eye.
In the foveal area, the human eye can discriminate two points separated by approximately one minute of arc. This means, for instance, that a 42″ HD (High Definition) screen of 1920×1080 pixels and 93 cm width, watched at a distance of 166 cm, presents a pixel density of about 60 pixels per degree in its central part, which corresponds approximately to the visual acuity.
Here, an example will be discussed in relation to an exemplary HMD device having a resolution of 1280×800 pixels. In this example, the horizontal field of view is approximately 90° (or 110°, depending on the source), and the theoretical number of pixels per eye is 640 (in practice, closer to 500): 500 pixels distributed over 90° to 110° yields a pixel density of only about 5 pixels per degree. That is roughly 10 times lower than what would be required to satisfy the Nyquist-Shannon sampling theorem for a visual acuity of one arc minute. Even if the next generation of this HMD is based on a display having 1920×1080 pixels, the resolution will remain far below the visual acuity of the foveal area.
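These figures can be checked with a short calculation. The sketch below (Python, with the numbers taken from the two preceding paragraphs; the 60 pixels-per-degree target for one-arc-minute acuity is the reference value used above) reproduces the pixel-per-degree estimates for the 42″ HD screen and for the example HMD.

```python
import math

def pixels_per_degree(num_pixels, physical_width_m, viewing_distance_m):
    """Angular pixel density at the screen centre, in pixels per degree."""
    pixel_pitch = physical_width_m / num_pixels
    # Width subtended by one degree of visual angle at the viewing distance.
    one_degree_width = viewing_distance_m * math.tan(math.radians(1.0))
    return one_degree_width / pixel_pitch

# 42-inch HD TV: 1920 pixels over 0.93 m, watched from 1.66 m.
tv_ppd = pixels_per_degree(1920, 0.93, 1.66)
print(f"42-inch HD TV: {tv_ppd:.1f} pixels/degree")   # ~60 ppd, close to 1 arc-minute acuity

# Example HMD: ~500 usable pixels per eye spread over roughly 90-110 degrees.
for fov_deg in (90, 110):
    hmd_ppd = 500 / fov_deg
    print(f"HMD over {fov_deg} deg: {hmd_ppd:.1f} pixels/degree "
          f"(~{60 / hmd_ppd:.0f}x below the 60 ppd acuity target)")
```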
Conversely, the pixel density in the peripheral vision area is higher than the human eye can perceive, since visual acuity decreases strongly in the visual periphery.
This disclosure illustratively describes a display device, such as an HMD device, that increases the pixel density in one area of the display and decreases it in the periphery of that area. Since the eyes can move within the content provided by the display, this increase of density is not limited to the extremely narrow area corresponding to the fovea, but is applied to an area corresponding to the average eye movements made before the head is moved. The user can therefore move his or her gaze around an area with high-density information while enjoying a feeling of immersion thanks to the large field of view carrying sparser information.
In an HMD device coupled with an inertial measurement unit (IMU), the user can move his/her head to center on an object or area perceived at low resolution in the periphery, thereby dramatically increasing the resolution of that object or area.
On a currently available HMD device such as the Oculus Rift, the pixels generated by the display are distributed by the optics with an almost constant density over the whole field of view. Since these optics are simple, they introduce a strong pincushion distortion that must be compensated by signal processing, by applying the inverse deformation to the video content to be displayed. The viewer can perceive immersive video or graphic content with a large field of view, but with a poor resolution, even in the central area of the HMD.
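As an illustration only, the pre-compensation mentioned above can be sketched as a radial polynomial "barrel" warp applied to normalized image coordinates. This is not the actual shader of any headset; the coefficients k1 and k2 below are purely illustrative placeholders.

```python
import numpy as np

def barrel_prewarp(uv, k1=0.22, k2=0.24):
    """Pre-distort normalized coordinates (origin at image centre, range [-1, 1])
    with a radial polynomial so that the optics' pincushion distortion cancels out.
    k1, k2 are illustrative coefficients, not values from any real headset."""
    r2 = np.sum(uv ** 2, axis=-1, keepdims=True)   # squared radius of each point
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2           # radial scaling grows toward the edges
    return uv * scale

# Example: where the source pixel for display position (0.8, 0.0) is sampled.
print(barrel_prewarp(np.array([[0.8, 0.0]])))
```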
What is called "foveation" is known in the technical field of image acquisition. Images can be acquired with foveated sensors, where the density of photosites is higher in the central area of the sensor, or with a standard sensor associated with a foveated lens.
Foveated imaging is also known in the technical field of signal processing, covering image compression, image transmission, or gaze contingent displays. For this last application, an eye tracker detects where the user is looking, and more information (image portions containing high frequencies) is dynamically displayed in this area than in the periphery (only low frequencies, blur).
This disclosure proposes a system configured (or adapted) to increase the pixel density in an area of the display and to decrease the pixel density in the peripheral regions of that area. It should be noted, however, that unlike foveated images, where details (high frequencies) are displayed in the area of interest and blurry information (only low frequencies) is displayed in the periphery at a constant pixel density, the proposed system modifies the pixel density itself.
The repartition of the pixels may be modified by the transformation T applied by the optics. In this case, the content to be displayed needs to be modified by the inverse geometric transformation T−1. An increase of the pixel density then increases the perceived luminance, and vice versa; this modification of luminance needs to be totally or partially compensated by the display or by the signal processing applied to the images to be displayed.
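By way of illustration, the sketch below pairs a hypothetical one-dimensional mapping T with its inverse pre-warp and a luminance compensation derived from the local derivative of T. The mapping chosen here is arbitrary; in the embodiment, T is fixed by the optics described below.

```python
import numpy as np

A = 1.5  # strength of the (hypothetical) optics mapping

def T(x):
    """Hypothetical optics mapping from normalized display position x in [-1, 1]
    to normalized viewing angle. T'(0) < 1, so central display pixels are packed
    into a narrower angular range: higher perceived pixel density at the centre."""
    return np.sinh(A * x) / np.sinh(A)

def prewarp_row(row):
    """Pre-warp one image row with T^-1 and partially compensate the luminance.

    Backward mapping: warping the content with T^-1 means each display pixel x
    samples the content at angle T(x). The perceived angular pixel density is
    proportional to 1 / T'(x), so luminance is scaled by T'(x) (normalized) to
    undo the brightness change caused by the density change."""
    n = row.size
    x = np.linspace(-1.0, 1.0, n)            # normalized display pixel positions
    warped = np.interp(T(x), x, row)         # geometric pre-warp (T^-1 of the content)

    dT = np.gradient(T(x), x)                # local expansion/compression of the optics
    gain = dT / dT.mean()                    # luminance compensation factor
    return np.clip(warped * gain, 0.0, 1.0)

row = np.linspace(0.0, 1.0, 11)              # simple test gradient used as "content"
print(np.round(prewarp_row(row), 3))
```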
A conventional optical element modifies the light as if it were generated at infinity (or at a finite but large distance) from the viewer, to allow accommodation by the viewer's eyes. It can also increase the field of view of the display to improve the viewer's feeling of immersion.
In addition to the conventional art, the non-limitative embodiment of the present disclosure proposes that the optical element applies a distortion, as shown in the corresponding figure, which modifies the repartition of the pixels perceived by the viewer.
The pixel density perceived by the viewer is represented by a curve in the corresponding figure.
The position β where the perceived density is at its maximum value is here placed at ½ FOV (Field of View). This position can vary as the eye moves and can be tracked by an eye tracking system on the HMD device. The optical element can be shifted left to right and up and down, depending on the output signal of the eye tracking system, to align the maximum-density region with the optical axis of a given eye. The image signal is then modified according to the new T/T−1 transforms associated with the modified optical configuration. When the eye is tracked and the optical element shifted accordingly, the value α used in the optical design can be decreased, restricted to the extent of the fovea and excluding the eye-motion margins considered in the static configuration.
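The coupling between the eye tracker and the actuated optical element can be pictured with the small sketch below, which assumes a simple proportional relation between the tracked gaze angle and the lateral shift of the lens; the constants and the interface are illustrative only and are not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    yaw_deg: float    # horizontal gaze angle, 0 = straight ahead
    pitch_deg: float  # vertical gaze angle

# Illustrative constants (not from the embodiment): millimetres of lateral lens
# shift per degree of gaze, and the mechanical travel limit of the actuator.
MM_PER_DEGREE = 0.35
MAX_TRAVEL_MM = 6.0

def lens_shift_mm(gaze: GazeSample) -> tuple[float, float]:
    """Lateral (x, y) shift of the optical element that re-centres the
    maximum-density region on the tracked gaze direction."""
    clamp = lambda v: max(-MAX_TRAVEL_MM, min(MAX_TRAVEL_MM, v))
    return clamp(gaze.yaw_deg * MM_PER_DEGREE), clamp(gaze.pitch_deg * MM_PER_DEGREE)

print(lens_shift_mm(GazeSample(yaw_deg=8.0, pitch_deg=-3.0)))  # -> (2.8, -1.05)
```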
The proposed solution is now compared to the standard implementation in terms of the relative perceived pixel density per angular sector.
The optical system is dimensioned to have a total field of view of about 95°; the object field width is 2×37.5 mm, which is the width of the screen.
In order to modify the perceived angular pixel densities, a lens needs to be set up in the vicinity of the object plane. That field lens shall map the pixels in the central part of the display, around the optical axis, with more angular density in the image space than the outer pixels. An optical surface with such a property is necessarily an even asphere: a rotationally symmetric polynomial aspheric surface described by a polynomial expansion of the deviation from a spherical surface, using only the even powers of the radial coordinate to describe the asphericity. The surface sag is given by:

z(r) = cr² / (1 + √(1 − (1 + k)c²r²)) + α₁r² + α₂r⁴ + α₃r⁶ + …
where c is the curvature, r is the radial coordinate in lens units, k is the conic constant, and α₁, α₂, α₃, … are the coefficients of the even polynomial terms. The field lens needs an aspheric front surface in order to modulate separately the different locations of the field points, and it also needs an aspheric rear surface to point each chief ray toward the entrance pupil of the optical system of the HMD device of the present embodiment.
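For reference, the even-asphere sag formula given above can be evaluated numerically as follows; the curvature, conic constant and polynomial coefficients used in the example call are illustrative placeholders, not the actual prescription of the embodiment.

```python
import math

def even_asphere_sag(r, c, k, coeffs):
    """Surface sag z(r) of a rotationally symmetric even asphere.

    r      : radial coordinate in lens units
    c      : curvature (1 / radius of curvature)
    k      : conic constant
    coeffs : (a1, a2, a3, ...) coefficients of the even powers r^2, r^4, r^6, ...
    """
    conic = c * r ** 2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c ** 2 * r ** 2))
    poly = sum(a * r ** (2 * (i + 1)) for i, a in enumerate(coeffs))
    return conic + poly

# Illustrative values only (not the prescription of the embodiment):
print(even_asphere_sag(r=5.0, c=1 / 40.0, k=-1.0, coeffs=(0.0, 1.0e-5, -2.0e-8)))
```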
The lens prescription is as follows:
Surface Data Summary:
Surface Data Detail:
As a side benefit of this configuration of the optical system, the MTF (modulation transfer function) is also improved.
Finally, in order to demonstrate the behavior of the optical system described above, its performance can be compared with that of the standard implementation, as illustrated in the accompanying figures.
As shown in the accompanying figure, the display device 100 includes a display module 105, for example an LCD or OLED panel.
The display device 100 also includes an optical component 120 and an actuator 115 to move the optical component 120. The optical component 120 may comprise the two optical elements designed as described above in connection with the optical system of the present embodiment.
The display device 100 is provided with an eye tracking sensor 130 to detect the eye gaze point of the viewer's eyes. The eye tracking sensor 130 may be mounted on the upper or lower portion of the display module 105, for example, so as to prevent any shading of the display screen by the sensor 130. Further, the display device 100 is provided with a position sensor 145, such as an inertial measurement unit (IMU), to measure the position of the viewer's head on which the display device 100 is mounted.
The display device 100 further includes a control module 140 to control the display module 105, the actuators 115, the eye tracking sensor 130 and the position sensor 145. The control module 140 is connected to these elements via wired or wireless connections. The control module 140 is also connected to an external device (not shown) via a wired or wireless connection. The external device stores images or videos to be provided to the display device 100. The images or videos are provided from the external device to the control module 140, which then presents the received image or video on the display module 105.
The display device 100 may have a hood (not shown) surrounding the periphery of the display module 105 to provide a dark space in the field of view of the viewer, which may provide the viewer with a better feeling of immersion.
As shown in the accompanying figure, the control module 200 comprises an I/O interface 210 for communicating with the external device and a memory module 230 for storing received content.
The module 200 further comprises a processor 220. The processor 220 is configured to detect the eye gaze point of the viewer based on the input from the eye tracking sensor 130, to activate the actuators 115 in response to the detected eye gaze point, to present images or videos received from the external device on the display module 105, and to scroll the images or videos displayed on the display module 105 in response to the viewer's head position detected by the position sensor 145.
The processor 220 is further configured to modify the images or videos by applying a geometric transformation T−1(I) as described above.
At step S10, the control module 140; 200 of the display device 100 receives immersive image or video content from an external device (not shown) via its I/O interface 210. The received content is stored on the memory module 230. The immersive content may be the whole content available to the HMD device, covering 360° (or less than 360°, but more than what can be displayed on the HMD's display screen at once); in other words, the immersive content may cover a wider area than the HMD's display screen. Thanks to such immersive content, a viewer can be immersed in a virtual world displayed on the HMD device and can move his/her head to select the part of the whole 360° content he/she wants to see.
At step S12, the viewer's head position is detected by the position sensor 145; then, at step S14, the part of the 360° content to be displayed on the HMD device is selected by the processor 220. The part of the 360° content that the viewer wants to see corresponds to the detected head position.
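A minimal sketch of this selection step is given below. It assumes the immersive content is stored as an equirectangular panorama and simply crops the window of pixels centred on the yaw and pitch reported by the position sensor; the helper name, the field-of-view values and the flat-crop simplification are all assumptions made for illustration.

```python
import numpy as np

def select_viewport(panorama, yaw_deg, pitch_deg, fov_h_deg=90.0, fov_v_deg=60.0):
    """Crop the part of an equirectangular 360-degree panorama centred on the
    viewer's head orientation (a flat crop, ignoring spherical reprojection)."""
    h, w = panorama.shape[:2]                      # panorama covers 360 x 180 degrees
    cx = int(((yaw_deg % 360.0) / 360.0) * w)      # column of the viewing direction
    cy = int(((90.0 - pitch_deg) / 180.0) * h)     # row of the viewing direction
    half_w = int(fov_h_deg / 360.0 * w / 2)
    half_h = int(fov_v_deg / 180.0 * h / 2)
    cols = np.arange(cx - half_w, cx + half_w) % w              # wrap around horizontally
    rows = np.clip(np.arange(cy - half_h, cy + half_h), 0, h - 1)
    return panorama[np.ix_(rows, cols)]

pano = np.random.rand(1024, 2048, 3)               # stand-in for the received content
view = select_viewport(pano, yaw_deg=30.0, pitch_deg=5.0)
print(view.shape)                                  # (340, 512, 3): roughly 90 x 60 degrees
```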
At step S16, the eye gaze position of the viewer on whom the display device 100 is mounted is determined by the eye tracking sensor 130. The detected information is output from the sensor 130 to the processor 220 of the control module 200. Based on this information, the processor 220 performs an analysis to determine the eye gaze position on the display module 105, in other words, to determine which area of the display module 105 the viewer is watching. It should be noted that step S16 can be performed during steps S10-S14.
Alternatively, at step S16, information on Regions of Interest (ROI) in the content can be used to determine the eye gaze position instead of detecting it with the eye tracking sensor 130. In this case, the ROI areas in the content (each image, or each frame of a video) can be determined in advance by test users or by any known dedicated ROI analysis software and associated with the content via metadata incorporated in the content. The ROI areas can serve as presumed eye gaze positions, since the viewer most likely pays attention to them and his/her gaze will tend to be attracted by these ROIs.
At step S18, the processor 220 reads out the image or video stored in the memory module 230 and modifies it so that an area having higher-density information is formed at the specified eye gaze position on the display 105. The modification of the image or video may be performed by applying a geometric transformation which corresponds to the inverse of the transformation function applied by the optical components 120 described above.
At step S20, the processor 220 controls the actuators 115 to move the respective optical components 120 in response to the specified eye gaze position on the display 105, so that the viewer sees the display 105 through the optical components 120. For example, an association between eye gaze positions on the display 105 and the corresponding positions of the optical components 120 may be established in advance and stored on the memory module 230. In this case, the processor 220 causes the actuators 115 to move the optical components 120 to the position which corresponds to the detected eye gaze position according to the association.
Since the optical components 120 apply the distortion T(I1), which compensates the transformation applied to the image or video presented on the display 105, the content perceived by the viewer through the optical components 120 has a higher density of information at the eye gaze position than in the periphery of the eye gaze position.
Steps S12 through S20 may be repeated while the image or video content is presented on the display 105, which makes it possible to change, in real time, the dense-information area of the image or video (the area of the content having a higher density of information than the rest of the content) and the positions of the optical components 120 in response to the detected eye gaze position.
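Taken together, steps S12 through S20 form a per-frame loop, which may be pictured as in the sketch below. All the interfaces used there (position sensor, eye tracker, actuators, display, warp object) are hypothetical stand-ins rather than the actual components of the embodiment, and select_viewport refers to the illustrative helper sketched earlier.

```python
def run_frame(sensors, actuators, display, panorama, warp):
    """One iteration of the S12-S20 loop described above (hypothetical interfaces)."""
    # S12: read the viewer's head orientation from the IMU.
    yaw, pitch = sensors.head_orientation()

    # S14: select the part of the 360-degree content that matches the head pose.
    view = select_viewport(panorama, yaw, pitch)

    # S16: determine the eye gaze position on the display (or use ROI metadata).
    gaze_x, gaze_y = sensors.eye_gaze()

    # S18: apply the inverse geometric transformation T^-1 centred on the gaze
    # point, so that the gaze area receives a higher density of pixels.
    frame = warp.apply_inverse(view, centre=(gaze_x, gaze_y))

    # S20: move the optical components so their high-density region follows the
    # gaze, using a pre-computed association between gaze and lens position.
    actuators.move_to(warp.lens_position_for(gaze_x, gaze_y))

    display.show(frame)
```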
According to the embodiment, the density of information available in the eye gaze area on the display 105 can be dramatically increased, so more details can be provided in this area. On the other hand, since visual acuity is much lower in peripheral vision, the feeling of immersion brought by a large field of view is preserved.
Alternatively, the dense-information area of the image or video and the positions of the optical components 120 may be fixed; for example, the dense-information area may be fixed in the central area of the image or video on the display 105, with the optical components 120 in the corresponding positions. In this case, steps S12 and S16 described above may be omitted.
Yet alternatively, at step S14, the processor 220 may modify the image or video directly as it is received from the external device and present the modified image or video on the display 105. In this case, the received image or video content need not be stored on the memory module 230.
All examples and conditional language recited herein are intended for pedagogical purposes, to aid the reader in understanding the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions; nor does the organization of such examples in the specification relate to a showing of the superiority or inferiority of the disclosure.
Number | Date | Country | Kind |
---|---|---|---|
14305923 | Jun 2014 | EP | regional |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2015/063435 | 6/16/2015 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/193287 | 12/23/2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5726670 | Seiichiro et al. | Mar 1998 | A |
7417617 | Eichenlaub | Aug 2008 | B2 |
7495638 | Lamvik | Feb 2009 | B2 |
20020063807 | Margulis | May 2002 | A1 |
20020113782 | Verberne et al. | Aug 2002 | A1 |
20020141614 | Lin | Oct 2002 | A1 |
20040227703 | Lamvik et al. | Nov 2004 | A1 |
20060145945 | Lewis et al. | Jul 2006 | A1 |
20090102915 | Arsenich | Apr 2009 | A1 |
20090309811 | Hinton | Dec 2009 | A1 |
20120068913 | Bar-zeev et al. | Mar 2012 | A1 |
20120105310 | Sverdrup et al. | May 2012 | A1 |
20120154277 | Bar-Zeev et al. | Jun 2012 | A1 |
20120320463 | Shabtay | Dec 2012 | A1 |
20130016413 | Saeedi | Jan 2013 | A1 |
20130215147 | Hilkes et al. | Aug 2013 | A1 |
20130235169 | Kato et al. | Sep 2013 | A1 |
20130246967 | Wheeler | Sep 2013 | A1 |
20140085190 | Erinjippurath et al. | Mar 2014 | A1 |
Number | Date | Country |
---|---|---|
102540463 | Jul 2012 | CN |
103593051 | Feb 2014 | CN |
2073191 | Jun 2009 | EP |
H01252993 | Oct 1989 | JP |
H0638219 | Feb 1994 | JP |
H0713497 | Jan 1995 | JP |
H07104210 | Apr 1995 | JP |
H07134542 | May 1995 | JP |
H08237636 | Sep 1996 | JP |
2000310747 | Nov 2000 | JP |
2001281594 | Oct 2001 | JP |
2009510540 | Mar 2009 | JP |
2014071811 | Apr 2014 | JP |
20020089475 | Nov 2002 | KR |
20030007708 | Jan 2003 | KR |
20090067123 | Jun 2009 | KR |
2322771 | Apr 2008 | RU |
WO 2006118483 | Nov 2006 | WO |
WO2012082807 | Jun 2012 | WO |
WO2013009414 | Jan 2013 | WO |
WO2013090100 | Jun 2013 | WO |
WO 2014054210 | Apr 2014 | WO |
Entry |
---|
Eichenlaub et al., “Increased Resolution on an ICFLCD Display Through Field-Sequential Subpixel Illumination”, SID Symposium Digest of Technical Papers, vol. 29, No. 1, May 1998, pp. 411-414. Abstract. |
Harding et al., “Evaluation of the Microvision Helmet-Mounted Display Technology for Synthetic Vision Application Engineering Prototype for the Virtual Cockpit Optimization Program”, US Army Aeromedical Research Laboratory, Report 2004-02, Nov. 2003, pp. 1-29. |
Riecke et al., “Selected Technical and Perceptual Aspects of Virtual Reality Displays”, Max Planck Institute for Biological Cybernetics, Technical Report 154, Oct. 2006, pp. 1-16. |
Hua et al., “A compact eyetracked optical see-through head-mounted display”, Proceedings of SPIE, Stereoscopic Displays and Applications XXIII, vol. 8288, Feb. 9, 2012, pp. 1-9. |
Kreylos, “An eye-tracked oculus rift”, Doc-ok.org, http://doc-ok.org/?p=1021, Jun. 2, 2014, pp. 1-21. |
Vockeroth et al., “The Combination of a Mobile Gaze-Driven and a Head-Mounted Camera in a Hybrid Perspective Setup”, IEEE International Conference on Systems, Man and Cybernetics, Montreal, Canada, Oct. 7, 2007, pp. 2576-2581. |
Anonymous, “How barrel distortion works on the Oculus Rift”, https://www.youtube.com/watch?v=B7qrgrrHry0, Jan. 19, 2014, pp. 1. |
PatBase English Language Translation, CN 103593051 A. |
PatDocs English Language Translation, JP H08237636 A. |
PatDocs English Language Translation, JP H0713497 A. |
PatDocs English Language Translation, JP H07134542 A. |
PatDocs English Language Translation, JP H07104210 A. |
PatDocs English Language Translation, JP H0638219 A. |
PatDocs English Language Translation, JP 2001281594 A. |
PatDocs English Language Translation, JP 2000310747 A. |
Number | Date | Country | |
---|---|---|---|
20170132757 A1 | May 2017 | US |