This invention relates generally to displaying data on display devices, and more particularly to non-uniform rescaling of input data for displaying on display devices such as organic light emitting diode display devices.
Electronic displays such as liquid crystal displays (LCD) and organic light emitting diode (OLED) displays can display images with high resolution. For example, televisions can display in high-definition television (HDTV at 1080p) or ultra-high-definition television (UHDTV at 2160p or "4K UHD"). As the native resolution of a display increases, the bandwidth required to drive the display at its native resolution also increases and can exceed the limitations of the link, consume excessive power, or cause unwanted latency.
A method for rescaling data to be displayed on a display device (e.g., an organic light emitting diode display device) of a head-mounted display (HMD) is disclosed. The HMD receives the data from a host system without exceeding the limits of the link between the host system and the HMD. One way of avoiding exceeding the link limitation is to scale down the resolution of a portion of the data that corresponds to the user's peripheral vision in the HMD, as opposed to a portion of data corresponding to the user's central vision of the display device. The received data is then rescaled at the HMD such that data corresponding to the whole display device is at full resolution as described below.
The method includes receiving a frame of data for displaying on the display device, where the received data includes a first portion of the data corresponding to a first pixel region at a first pixel resolution (e.g., native resolution of the OLED display device) and a second portion of the data corresponding to a second pixel region, wherein the second portion of the data is at a second pixel resolution lower than the first pixel resolution. The method also includes rescaling the received data for displaying the received data at a native resolution of the display device, where the rescaling of the received data includes scaling the first portion of the data using a first scaling factor (e.g., 1.0) and the second portion of the data using a second scaling factor (e.g., 0.5). The method further includes providing the rescaled data for displaying on the display device.
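For illustration only, a minimal sketch of this reassembly step in Python, assuming NumPy image arrays and a simplified layout in which the first (full-resolution) portion and the second (half-resolution) portion are stacked horizontal bands of the frame; the function and variable names are illustrative and not taken from the disclosure:

```python
import numpy as np

def rescale_frame(first_portion, second_portion, second_scaling_factor=0.5):
    """Reassemble a full-resolution frame from mixed-resolution input.

    first_portion:  pixels already at native resolution (scaling factor 1.0).
    second_portion: pixels transmitted at a reduced resolution; they are
                    upscaled here by 1 / second_scaling_factor.
    """
    upscale = int(round(1.0 / second_scaling_factor))
    # Nearest-neighbor upscaling: replicate each low-resolution pixel.
    second_upscaled = np.repeat(
        np.repeat(second_portion, upscale, axis=0), upscale, axis=1)
    # Stack the full-resolution band above the upscaled band; a real panel
    # would use a 2-D region map rather than simple horizontal bands.
    return np.vstack([first_portion, second_upscaled])

# A 1080p frame whose lower band was transmitted at half resolution:
first = np.ones((540, 1920))    # central band, native resolution
second = np.zeros((270, 960))   # peripheral band, half resolution
full = rescale_frame(first, second)   # shape (1080, 1920)
```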
In one embodiment, the received frame of data includes a mapping between the first pixel region and the first scaling factor, and between the second pixel region and the second scaling factor. The scaling factors and their mapping to various pixel regions of the display device may be fixed or may vary over time. Their variation over time may be based on where on the display device the user is looking (e.g., determined using eye tracking) or based on the characteristics of the content being displayed on the display device.
In one embodiment, the received frame of data may be compressed to reduce the size of the data, and the first scaling factor and the second scaling factor may be determined based on the properties of compression used for compressing the data.
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
The VR headset 105 is a head-mounted display that presents media to a user. Examples of media presented by the VR headset 105 include one or more images, video, audio, or some combination thereof. In some embodiments, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the VR headset 105, the VR console 110, or both, and presents audio data based on the audio information. An embodiment of the VR headset 105 is further described below.
The VR headset 105 includes an electronic display 115, an optics block 118, one or more locators 120, one or more position sensors 125, and an inertial measurement unit (IMU) 130. The electronic display 115 displays images to the user in accordance with data received from the VR console 110. In various embodiments, the electronic display 115 may comprise a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of the electronic display 115 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), a passive-matrix organic light-emitting diode display (PMOLED), some other display, or some combination thereof.
The electronic display 115 includes a display area comprising a plurality of pixels, where each pixel is a discrete light emitting component. An example embodiment of the pixel structure of the electronic display 115 is described below.
In various embodiments, the display area of the electronic display 115 arranges sub-pixels in a hexagonal layout, in contrast to the rectangular layout used by conventional RGB-type systems. However, some users are more comfortable viewing images that appear to have been generated via a rectangular layout of sub-pixels.
The optics block 118 magnifies image light received from the electronic display 115, corrects optical errors associated with the image light, and presents the corrected image light to a user of the VR headset 105. An optical element may be an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, or any other suitable optical element that affects the image light. Moreover, the optics block 118 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 118 may have one or more coatings, such as anti-reflective coatings.
Magnification of the image light by the optics block 118 allows the electronic display 115 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the displayed media. For example, the displayed media may be presented using almost all (e.g., 110 degrees diagonal), and in some cases all, of the user's field of view.
The locators 120 are objects located in specific positions on the VR headset 105 relative to one another and relative to a specific reference point on the VR headset 105. A locator 120 may be a light emitting diode (LED), a corner cube reflector, a reflective marker, a type of light source that contrasts with an environment in which the VR headset 105 operates, or some combination thereof. In embodiments where the locators 120 are active (i.e., an LED or other type of light emitting device), the locators 120 may emit light in the visible band (~380 nm to 750 nm), in the infrared (IR) band (~750 nm to 1 mm), in the ultraviolet band (10 nm to 380 nm), some other portion of the electromagnetic spectrum, or some combination thereof.
In some embodiments, the locators 120 are located beneath an outer surface of the VR headset 105, which is transparent to the wavelengths of light emitted or reflected by the locators 120 or is thin enough to not substantially attenuate the wavelengths of light emitted or reflected by the locators 120. Additionally, in some embodiments, the outer surface or other portions of the VR headset 105 are opaque in the visible band of wavelengths of light. Thus, the locators 120 may emit light in the IR band under an outer surface that is transparent in the IR band but opaque in the visible band.
The IMU 130 is an electronic device that generates fast calibration data based on measurement signals received from one or more of the position sensors 125. A position sensor 125 generates one or more measurement signals in response to motion of the VR headset 105. Examples of position sensors 125 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 130, or some combination thereof. The position sensors 125 may be located external to the IMU 130, internal to the IMU 130, or some combination thereof.
Based on the one or more measurement signals from one or more position sensors 125, the IMU 130 generates fast calibration data indicating an estimated position of the VR headset 105 relative to an initial position of the VR headset 105. For example, the position sensors 125 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll). In some embodiments, the IMU 130 rapidly samples the measurement signals and calculates the estimated position of the VR headset 105 from the sampled data. For example, the IMU 130 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated position of a reference point on the VR headset 105. Alternatively, the IMU 130 provides the sampled measurement signals to the VR console 110, which determines the fast calibration data. The reference point is a point that may be used to describe the position of the VR headset 105. While the reference point may generally be defined as a point in space, in practice the reference point is defined as a point within the VR headset 105 (e.g., a center of the IMU 130).
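As a minimal sketch of the double-integration step described above, assuming gravity-compensated accelerometer samples and simple Euler integration (gyroscope fusion and error correction are omitted; all names are illustrative):

```python
import numpy as np

def integrate_imu(accel_samples, dt, p0=None, v0=None):
    """Dead-reckon a position estimate from accelerometer samples.

    accel_samples: (N, 3) array of gravity-compensated accelerations (m/s^2).
    dt: sample period in seconds.
    Returns the estimated position of the reference point after N samples.
    """
    p = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
    v = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
    for a in accel_samples:
        v = v + a * dt   # integrate acceleration into a velocity vector
        p = p + v * dt   # integrate velocity into a position estimate
    return p
```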
The IMU 130 receives one or more calibration parameters from the VR console 110. As further discussed below, the one or more calibration parameters are used to maintain tracking of the VR headset 105. Based on a received calibration parameter, the IMU 130 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, certain calibration parameters cause the IMU 130 to update an initial position of the reference point so it corresponds to a next calibrated position of the reference point. Updating the initial position of the reference point to the next calibrated position of the reference point helps reduce accumulated error associated with the determined estimated position. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time.
The imaging device 135 generates slow calibration data in accordance with calibration parameters received from the VR console 110. Slow calibration data includes one or more images showing observed positions of the locators 120 that are detectable by the imaging device 135. The imaging device 135 may include one or more cameras, one or more video cameras, any other device capable of capturing images including one or more of the locators 120, or some combination thereof. Additionally, the imaging device 135 may include one or more filters (e.g., used to increase signal to noise ratio). The imaging device 135 is configured to detect light emitted or reflected from locators 120 in a field of view of the imaging device 135. In embodiments where the locators 120 include passive elements (e.g., a retroreflector), the imaging device 135 may include a light source that illuminates some or all of the locators 120, which retro-reflect the light towards the light source in the imaging device 135. Slow calibration data is communicated from the imaging device 135 to the VR console 110, and the imaging device 135 receives one or more calibration parameters from the VR console 110 to adjust one or more imaging parameters (e.g., focal length, focus, frame rate, ISO, sensor temperature, shutter speed, aperture, etc.).
In some embodiments, the imaging device 135 and the locators 120 may function as a positional tracking system that tracks the position of the one or more locators 120 and reports it to the VR console 110. The imaging device 135 may include one or more sensors (e.g., a focal plane array including an array of light sensing pixels) that track the position of the locators 120 and report their positional information to the VR console 110, and the imaging device 135 receives one or more calibration parameters from the VR console 110 to adjust one or more imaging and/or sensing parameters.
The VR input interface 140 is a device that allows a user to send action requests to the VR console 110. An action request is a request to perform a particular action. For example, an action request may be to start or end an application or to perform a particular action within the application. The VR input interface 140 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the received action requests to the VR console 110. An action request received by the VR input interface 140 is communicated to the VR console 110, which performs an action corresponding to the action request. In some embodiments, the VR input interface 140 may provide haptic feedback to the user in accordance with instructions received from the VR console 110. For example, haptic feedback is provided when an action request is received, or the VR console 110 communicates instructions to the VR input interface 140 causing the VR input interface 140 to generate haptic feedback when the VR console 110 performs an action.
The VR console 110 provides media to the VR headset 105 for presentation to the user in accordance with information received from one or more of: the imaging device 135, the VR headset 105, and the VR input interface 140. In the example shown, the VR console 110 includes an application store 145, a tracking module 150, and a VR engine 155.
The application store 145 stores one or more applications for execution by the VR console 110. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the VR headset 105 or the VR input interface 140. Examples of applications include: gaming applications, conferencing applications, video playback applications, or other suitable applications.
The tracking module 150 calibrates the VR system 100 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determining the position of the VR headset 105. For example, the tracking module 150 adjusts the focus of the imaging device 135 to obtain a more accurate position for observed locators on the VR headset 105. Moreover, calibration performed by the tracking module 150 accounts for information received from the IMU 130. Additionally, if tracking of the VR headset 105 is lost (e.g., the imaging device 135 loses line of sight of at least a threshold number of the locators 120), the tracking module 150 re-calibrates some or all of the system environment 100.
The tracking module 150 tracks movements of the VR headset 105 using slow calibration information from the imaging device 135. The tracking module 150 determines positions of a reference point of the VR headset 105 using observed locators from the slow calibration information and a model of the VR headset 105. The tracking module 150 also determines positions of a reference point of the VR headset 105 using position information from the fast calibration information. Additionally, in some embodiments, the tracking module 150 may use portions of the fast calibration information, the slow calibration information, or some combination thereof, to predict a future location of the headset 105. The tracking module 150 provides the estimated or predicted future position of the VR headset 105 to the VR engine 155.
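The disclosure does not specify the prediction model; as one simple possibility, a constant-velocity extrapolation of the reference point might look like the following illustrative sketch:

```python
def predict_position(position, velocity, lead_time):
    """Constant-velocity extrapolation of the headset reference point.

    position, velocity: 3-vectors derived from the fast and slow
    calibration information.
    lead_time: how far ahead to predict, in seconds (e.g., one frame).
    """
    return [p + v * lead_time for p, v in zip(position, velocity)]
```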
The VR engine 155 executes applications within the system environment 100 and receives position information, acceleration information, velocity information, predicted future positions, or some combination thereof of the VR headset 105 from the tracking module 150. Based on the received information, the VR engine 155 determines content to provide to the VR headset 105 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the VR engine 155 generates content for the VR headset 105 that mirrors the user's movement in a virtual environment. Additionally, the VR engine 155 performs an action within an application executing on the VR console 110 in response to an action request received from the VR input interface 140 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the VR headset 105 or haptic feedback via the VR input interface 140.
The locators 120 are located in fixed positions on the front rigid body 205 relative to one another and relative to a reference point 215.
The PC or host system 310 provides input data that is displayed on the OLED display panel 380 after appropriate rescaling or interpolation. The PC 310 provides input data to the data receive module 320, where the input data might not have the same resolution (also referred to as display resolution or pixel resolution) for an entire frame of input data. For a given frame of input data, for example, the input data includes a first resolution (e.g., a native resolution of the OLED display panel 380) for a first portion of the frame corresponding to a first pixel region (e.g., region 425) and a second resolution lower than the first resolution for a second portion of the frame corresponding to a second pixel region (e.g., region 410 or 405).
The received input data is interpolated at the interpolation core module 340 to convert the input data into full resolution data before it is sent to the panel driver 350 for displaying on the OLED display panel 380. The input data received at the data receive module 320 includes certain portions of data at a resolution lower than the full resolution of the OLED display panel 380 on a per frame basis.
The interpolation core module 340 receives the input data and performs interpolation to convert the input data to full resolution data. For example, when the OLED display panel 380 is driven at 1080p resolution, the input data sent to the interpolation core module 340 may contain portions of data at a resolution lower than 1080p (e.g., 720p). In one embodiment, rescale factors for interpolating the input data to full resolution are set on a per-row basis (i.e., one or more rows at a time), a per-column basis (i.e., one or more columns at a time), a per-region basis (e.g., regions such as regions 405, 410, and 425 described below), or some combination thereof.
Input data corresponding to other regions mapped to other portions of the panel 380 has rescaling factors lower than 1.0, such as 0.5 (e.g., region 410) and 0.25 (e.g., region 405). Accordingly, input data corresponding to regions 410 and 405 is interpolated with the respective rescaling factors before being sent to the panel driver 350. Interpolation can be as simple as pixel replication, where an adjacent pixel is filled with the value of the preceding pixel. Alternatively, interpolation can include other techniques such as linear interpolation, bilinear interpolation, spline interpolation, and the like. The rescaling factors are configurable and can be configured based on the interpolation technique used, as sketched below.
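For illustration, a minimal one-dimensional sketch of the two interpolation approaches named above, assuming NumPy arrays; the names and defaults are illustrative:

```python
import numpy as np

def upscale_row(row, rescale_factor, mode="replicate"):
    """Upscale one row of input data by 1 / rescale_factor.

    mode="replicate": each received pixel is copied into its neighbors,
    so an adjacent output pixel takes the value of the preceding pixel
    (assumes 1 / rescale_factor is close to an integer).
    mode="linear": output pixels are linearly interpolated between the
    received samples.
    """
    n_out = int(round(len(row) / rescale_factor))
    if mode == "replicate":
        return np.repeat(row, int(round(1.0 / rescale_factor)))[:n_out]
    # Linear interpolation onto the full-resolution pixel grid.
    x_in = np.linspace(0.0, 1.0, num=len(row))
    x_out = np.linspace(0.0, 1.0, num=n_out)
    return np.interp(x_out, x_in, row)

row_half = np.arange(960.0)            # a row from a half-resolution region
row_full = upscale_row(row_half, 0.5)  # back to 1920 full-resolution pixels
```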
In one embodiment, the highest resolution of data displayed on the display panel may be equal to the full resolution (i.e., native resolution) of the OLED display panel 380. In other embodiments, the highest resolution of data displayed on the display panel need not be equal to the full resolution of the OLED display panel 380 but may instead be a resolution lower than the full resolution. For example, when the full resolution is 1920×1080 pixels, the highest resolution may be 1280×720 pixels, which is lower than the full resolution. In this example, the highest rescaling factor would be less than 1.0 and would be based on a comparison between the actual resolution (i.e., 1280×720 pixels) and the full resolution (i.e., 1920×1080 pixels).
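Working through this example (an illustrative calculation, not language from the disclosure): the per-axis ratio is 1280/1920 = 720/1080 ≈ 0.67, so even the highest-resolution region would be interpolated by a factor of about 1.5 in each dimension:

```python
def highest_rescaling_factor(actual_res, full_res):
    """Per-axis ratio of the highest transmitted resolution to the
    panel's native (full) resolution."""
    return (actual_res[0] / full_res[0], actual_res[1] / full_res[1])

# 1280x720 data on a 1920x1080 panel -> (0.666..., 0.666...), i.e. each
# axis of every region is upscaled by at least 1 / 0.667 = 1.5x.
print(highest_rescaling_factor((1280, 720), (1920, 1080)))
```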
In some embodiments, the input data provided by the PC 310 to the data receive module 320 includes a mapping between the different regions of the OLED display panel 380 and their corresponding rescaling factors. For example, the input data includes a lookup table that maps each of the nine regions of the display panel to its corresponding rescaling factor.
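A minimal sketch of such a lookup table, assuming a 3×3 grid of regions with illustrative factors patterned after regions 405, 410, and 425; the actual layout and values would come from the transmitted mapping:

```python
# Hypothetical per-frame region map: a 3x3 grid of regions, each mapped
# to a rescaling factor (1.0 at the center, lower toward the periphery).
REGION_RESCALE_MAP = [
    [0.25, 0.5, 0.25],
    [0.5,  1.0, 0.5 ],
    [0.25, 0.5, 0.25],
]

def rescale_factor_for_pixel(x, y, width, height):
    """Look up the rescaling factor for an output pixel coordinate."""
    col = min(x * 3 // width, 2)
    row = min(y * 3 // height, 2)
    return REGION_RESCALE_MAP[row][col]
```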
The rescaling factors may be determined by one or more of at least three different methods: a static method based on the physical properties of the human eye and the display panel, a dynamic method using eye tracking of the viewer of the headset, and a content-based method depending on the content being displayed. In the static method, the mapping between the various regions of the display panel and their corresponding rescaling factors is fixed and does not change with time. For example, the data corresponding to the central region of the display panel may always be presented at full resolution, while the data corresponding to the peripheral regions is always rescaled with factors lower than 1.0.
In the dynamic method, the rescaling factors are dynamically computed based on where the user is looking, which may be determined using eye tracking. In one embodiment, a region (either radial or grid-based) of the OLED display panel 380 with full resolution can be dynamically changed using an eye tracking device. In one embodiment, the eye tracking device comprises a camera (e.g., sensor 390) that is mounted within the VR headset (either within or outside of the OLED display panel 380). A user's fovea is responsible for sharp central vision (also called foveal vision), which is necessary for human activities such as reading and viewing images or video, where visual detail is of primary importance. The user's fovea can be tracked using an eye tracker such that the region of the OLED display panel 380 that the user is looking at can always be presented with data at full resolution, and the other portions of the display can be presented with rescaled data to minimize the latency and bandwidth needed for transmitting display data. For example, the full resolution region (e.g., region 425) can be dynamically repositioned to follow the user's gaze, with the surrounding regions rescaled at lower resolutions, as sketched below.
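An illustrative sketch of deriving a region map from a tracked gaze point, assuming a 3×3 grid and the same illustrative factors as above (the grid granularity and factor values are assumptions, not from the disclosure):

```python
def region_map_from_gaze(gaze_x, gaze_y, width, height):
    """Build a 3x3 grid of rescaling factors centered on the gaze point.

    The region containing the gaze (foveal vision) gets factor 1.0, its
    immediate neighbors 0.5, and all remaining regions 0.25.
    """
    gcol = min(int(gaze_x * 3 / width), 2)
    grow = min(int(gaze_y * 3 / height), 2)
    factors = []
    for row in range(3):
        factors.append([])
        for col in range(3):
            # Chebyshev distance from the gazed-at region.
            d = max(abs(row - grow), abs(col - gcol))
            factors[row].append(1.0 if d == 0 else (0.5 if d == 1 else 0.25))
    return factors
```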
In the content-based method, the rescaling factors are determined based on the content being displayed on the OLED display panel 380. As the PC 310 is aware of the characteristics of the content being rendered for display on the OLED display panel 380, the PC 310 is able to compute the rescaling factors for the regions on a per frame basis before sending the data to the data receive module 320. For example, if the PC 310 determines that regions 425 and 430 contain content with fine visual detail, it may send the data for those regions at full resolution while sending the data for the remaining regions at lower resolutions.
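One way such a per-region decision might be computed is sketched below, using mean gradient magnitude as a crude detail metric; the metric, thresholds, and names are illustrative assumptions, as the disclosure says only that the host computes factors from content characteristics:

```python
import numpy as np

def content_based_factor(region_pixels, detail_threshold=10.0):
    """Choose a rescaling factor from a crude detail metric.

    region_pixels: 2-D array of luminance values for one region.
    The mean gradient magnitude stands in for "visual detail"; regions
    with fine detail are kept at full resolution.
    """
    gy, gx = np.gradient(region_pixels.astype(float))
    detail = np.mean(np.hypot(gx, gy))
    if detail > detail_threshold:
        return 1.0                      # fine detail: full resolution
    return 0.5 if detail > detail_threshold / 2 else 0.25
```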
In some embodiments, the data corresponding to boundaries between one region and another region of the OLED display panel 380 is smoothed to perform a gradual transition between the regions. For example, the data at the boundary between regions 425 and 430 may be blended over a small band of pixels so that the change in resolution between the two regions does not appear as an abrupt edge.
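A minimal sketch of one such smoothing pass, assuming a horizontal seam and a simple linear ramp toward a neighbor average; the disclosure does not specify a smoothing filter, and all names here are illustrative:

```python
import numpy as np

def smooth_seam(frame, seam_row, blend_width=8):
    """Soften a resolution seam by blending each row in a band around
    seam_row with the average of its vertical neighbors; the blend weight
    ramps up linearly toward the seam. Assumes the seam is not at the
    frame edge.
    """
    out = frame.astype(float).copy()
    half = blend_width // 2
    for i in range(seam_row - half, seam_row + half):
        w = 1.0 - abs(i - seam_row + 0.5) / half   # peaks at the seam
        neighbor_avg = (frame[i - 1].astype(float) +
                        frame[i + 1].astype(float)) / 2.0
        out[i] = (1.0 - w) * frame[i] + w * neighbor_avg
    return out
```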
In some embodiments, the input data received at the data receive module 320 may be compressed to reduce the size of the data instead of being scaled at different resolutions for different regions as described above. In such embodiments, the first scaling factor and the second scaling factor may be determined based on the properties of the compression used for compressing the data.
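As one illustrative (assumed) way to connect compression properties to scaling factors, a heuristic might map each region's compressed bit budget to an effective factor; nothing in the disclosure mandates this particular mapping:

```python
def factor_from_compression(bits_per_pixel, full_bpp=24.0):
    """Map a region's compressed bit budget to an effective rescaling
    factor. The thresholds below are illustrative assumptions.
    """
    ratio = bits_per_pixel / full_bpp
    if ratio >= 0.5:
        return 1.0
    return 0.5 if ratio >= 0.125 else 0.25
```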
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosed embodiments are intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/132,360, filed Mar. 12, 2015, which is incorporated by reference in its entirety.