The present disclosure relates to the transmission of a framebuffer in an image display system arranged for displaying streamed images, such as a VR/AR system.
Image data are often transported in framebuffers across a digital display interface; known interfaces include DisplayPort and MIPI. Normally, the pixels are streamed one image line at a time at a rate of 60-90 Hz. Current systems are well adapted to the display of video streams, but VR/AR systems typically require a much higher resolution, which places higher demands on the transport methods.
An object of the present disclosure is to enable faster transmission of image data over a digital display interface.
This is achieved according to the present disclosure by a method of transmitting image data over a digital display interface in an image display system, comprising dividing the image data into framebuffers, each framebuffer comprising a number of pixels to be displayed as an image on a display unit in the image display system.
The disclosure also relates to an image display system arranged to display an image to a user, said image display system comprising a display stream source arranged to provide a stream of image data, an encoder arranged to convert the stream of image data into a packed display stream, and a decoder arranged to receive and decode the framebuffers and forward the decoded display stream to the display.
Hence, according to the disclosure, the time to transmit a full framebuffer is reduced by reducing the amount of pixel data that is to be transmitted in some areas of the image, in particular in areas of the image that are outside of the viewer's focus. This means that foveation is utilized to reduce the amount of image data needed in parts of the image that are outside the area that the user is focusing on.
The disclosure also relates to a computer program product for performing the methods disclosed in this document, and a framebuffer package comprising image data to be displayed as an image in an image display system, said framebuffer package comprising at least a first and a second block, each block comprising pixel data to be displayed in an area of the image, the first block comprising pixel data stored with a first resolution and the second block comprising second pixel data stored with a second resolution, which is lower than the first resolution.
The devices and methods discussed in this disclosure are useful for any type of image display system for displaying streamed images, in which it is advantageous to vary the resolution in the images. In particular, this applies to VR/AR systems.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
The various aspects of the invention are particularly useful in image display systems in which the images may suitably be displayed with varying resolution. One example of such systems is a VR/AR system in which foveation is often applied to display a higher resolution in the central part of the image compared to the peripheral parts.
An aspect of this disclosure relates to a method of transmitting image data over a digital display interface in an image display system, comprising dividing the image data into framebuffers, each framebuffer comprising a number of pixels to be displayed as an image on a display unit in the image display system.
The framebuffer package may further comprise metadata comprising instructions on how to decode the image data in the framebuffer package, in particular information about where in the image the data from each block is to be displayed and with what amount of upscaling. In this case, the method comprises the step of including such metadata in the framebuffer, with instructions on how to decode each stripe, and the encoder is preferably arranged to include the metadata in each framebuffer. Including the metadata related to a framebuffer in the framebuffer itself is an efficient way of providing the metadata to the decoder. Alternatively, the metadata may be transferred from the encoder to the decoder separately from the framebuffer by any suitable means.
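As an illustration only, such per-block metadata could be represented as follows; the field names and values are hypothetical, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BlockMetadata:
    """Decoding instructions for one block of a stripe (hypothetical layout)."""
    x: int        # horizontal start of the block in the unpacked image, in pixels
    width: int    # width of the block in the unpacked image, in pixels
    upscale: int  # linear upscaling factor: 1 (full), 2 (1/4) or 4 (1/16 resolution)

# A stripe whose centre block keeps full resolution while the
# flanking blocks are stored at 1/4 of the source resolution:
stripe_meta = [
    BlockMetadata(x=0,    width=1024, upscale=2),
    BlockMetadata(x=1024, width=1024, upscale=1),
    BlockMetadata(x=2048, width=1024, upscale=2),
]

# Pixels actually transmitted per packed line for this stripe:
packed_width = sum(m.width // m.upscale for m in stripe_meta)  # 512 + 1024 + 512
```

The decoder only needs this small table per stripe to know where each block's pixels belong and how far to stretch them.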
Typically, the first block holds pixel data of the first resolution to be displayed in a first part of the image, and the stripe further comprises two additional blocks of the second resolution to be displayed on opposite sides of the first part of the image. In preferred embodiments, the stripe comprises two further blocks of a third resolution, lower than the second resolution, to be displayed on opposite sides of the additional blocks. As will be understood, the areas of different resolutions may be arranged in any suitable way.
The resolution to be used in each of the blocks may be selected freely. In some embodiments, the first resolution is the same as the resolution in a source image and the second resolution is ¼ of the first resolution. This allows the maximum available resolution to be maintained in the most important part or parts of the image. If there are also blocks having a third resolution, this third resolution may be 1/16 of the first resolution.
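To put numbers on the saving, a block stored at 1/4 of the first resolution carries one transmitted pixel per 2×2 image pixels, and a block at 1/16 one per 4×4. The sketch below counts only the horizontal reduction within one packed line, with illustrative block widths:

```python
# Display widths (pixels) of five blocks on one image line:
# edges at 1/16 resolution, flanks at 1/4, centre at full resolution.
block_widths   = [512, 512, 1024, 512, 512]
linear_upscale = [4,   2,   1,    2,   4]    # 1/16 -> 4, 1/4 -> 2, full -> 1

full_pixels   = sum(block_widths)
packed_pixels = sum(w // s for w, s in zip(block_widths, linear_upscale))
# 3072 full pixels shrink to 128 + 256 + 1024 + 256 + 128 = 1792 per line;
# the reduced-resolution blocks additionally skip lines vertically.
```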
The disclosure also relates to a computer program product for controlling the transfer of image data across a digital display interface, said computer program product comprising computer-readable code means which, when run in a processor controlling an image display system, will cause the system to perform the transfer according to the methods discussed in this disclosure.
The blocks are transmitted in the same way as any image data, which means that standard compression methods can be applied to the blocks, including display stream compression (DSC).
A decoder is arranged to receive the packed display stream and the metadata and to unpack the framebuffers according to the instructions in the metadata. This involves, for any block that has a reduced resolution, adding pixels to make up for the reduced number of pixels in the block. How many pixels to add depends on how much the resolution is reduced: for example, three pixels may be added to convert one pixel into a block of 2×2 pixels, or 15 pixels to convert one pixel into a block of 4×4 pixels. The image data, decoded and with the added pixels making up for the reduced resolution in some parts of the image, are then displayed on the display unit.
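The pixel-adding step can be sketched as a nearest-neighbour upscale. This is one possible reconstruction; the disclosure does not mandate a particular filter:

```python
def upscale_block(block, factor):
    """Replicate each pixel of `block` (a list of rows) into a
    factor x factor square; factor=2 adds 3 pixels per source pixel,
    factor=4 adds 15."""
    out = []
    for row in block:
        expanded = [p for p in row for _ in range(factor)]
        out.extend(list(expanded) for _ in range(factor))
    return out

# One packed pixel becomes a 2x2 square of identical pixels:
square = upscale_block([[7]], 2)   # [[7, 7], [7, 7]]
```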
In this example, block A holds image data of the highest resolution, which is intended to be displayed in the area of the image that the user is focusing on, for example, in the middle of the image. Blocks B and C hold image data of the second highest resolution, intended to be displayed in areas adjacent to the image data of block A, typically on either side. Blocks D and E hold image data to be displayed in areas adjacent to the image data of blocks B and C, respectively, and blocks F and G hold image data of the lowest resolution to be displayed near the edges of the image. There is also empty space in this framebuffer, after block G, which can be used to transmit control data.
It follows that metadata must be provided to give instructions on how to display the image data of the framebuffer. Specifically, for each block, the instructions should detail in which part of the image the image data from that block should be displayed, and also how much the image data should be upscaled, that is, how many pixels in the image should be covered by each pixel in the block. This is reflected in
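For illustration, unpacking one packed scanline according to such instructions might look as follows; the block table is hypothetical, with widths chosen only so that the blocks tile the line:

```python
def unpack_line(packed, blocks):
    """Reconstruct one full-resolution scanline.

    `packed` holds the transmitted pixels; `blocks` lists
    (transmitted_pixel_count, linear_upscale) pairs in transmission order.
    """
    out, pos = [], 0
    for count, upscale in blocks:
        for p in packed[pos:pos + count]:
            out.extend([p] * upscale)   # horizontal replication
        pos += count
    return out

# A 1/4-resolution flank, a full-resolution centre, another flank:
line = unpack_line(list(range(8)), [(2, 2), (4, 1), (2, 2)])
# -> [0, 0, 1, 1, 2, 3, 4, 5, 6, 6, 7, 7]
```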
The metadata describing how each stripe is to be decoded can be embedded into the source signal itself as part of the image data, transferred as part of the display data channel/extended display identification data (DDC/EDID), or sent beforehand via some other channel such as USB. In the example of
Preferably, when generating the framebuffer as shown in
As will be understood, the number of blocks and the size of each block in
The number of stripes and their sizes may be selected freely. In the example it is assumed that each stripe has 16 image lines. It would be possible to let each stripe consist of one line, although this would limit how low the lowest resolution could be.
In the extreme case, the framebuffer is not divided into several stripes, but the whole image is treated as one stripe including a number of blocks.
In a first step S41, a stream of image data is received from a display stream source 13 in an encoder 15. In a second step S42, the stream is packed into framebuffers as discussed above in connection with
In step S43, the framebuffers are divided into stripes, in the encoder. The examples above include 16 stripes, but it is possible to use only one stripe for the whole image. As discussed above, the stripe may comprise a suitable number of lines.
In step S44, each stripe is divided into blocks, each block comprising image data stored with a particular resolution.
In step S45, the framebuffers, divided according to steps S43 and S44, are transmitted to the decoder 17.
In step S46, the decoder 17 unpacks the image data. This includes upscaling the data in some or all of the blocks, depending on the resolution. In the example of
Finally, in step S47, the image data, including data from all the blocks upscaled according to step S46, are displayed on the display unit 11.
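Steps S41-S47 can be sketched end to end. This is a simplified model: compression is ignored, each stripe is a single line, vertical subsampling is omitted, and the layout format is an assumption made only for illustration.

```python
def encode_stripe(row, layout):
    """S42-S44: pack one image line into blocks; `layout` lists
    (start_col, width, linear_upscale) triples, and pixels are
    simply dropped to reach the reduced resolution."""
    packed = []
    for start, width, up in layout:
        packed.extend(row[start:start + width:up])
    return packed

def decode_stripe(packed, layout):
    """S46: upscale each block back to its display width."""
    out, pos = [], 0
    for _, width, up in layout:
        n = width // up
        for p in packed[pos:pos + n]:
            out.extend([p] * up)
        pos += n
    return out

# S41/S45/S47: a full-resolution left half and a 1/4-resolution right half.
layout = [(0, 4, 1), (4, 4, 2)]
row = [0, 1, 2, 3, 4, 5, 6, 7]
packed = encode_stripe(row, layout)        # 6 pixels transmitted instead of 8
restored = decode_stripe(packed, layout)   # [0, 1, 2, 3, 4, 4, 6, 6]
```

As the example shows, the reduction is lossy in the reduced-resolution block, which is acceptable because that block lies outside the viewer's focus.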
Each packed stripe must be buffered into the FPGA entirely before the reconstruction can start. For a 3 k display this would be around 60 kilobytes of random access memory (RAM) (3072 pixels with 5:1 compression, 3 bytes per pixel, 16 scanlines per stripe, double buffering for stripes).
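The estimate can be reproduced with the figures stated above (5:1 compression, 3 bytes per pixel, 16 scanlines per stripe, double buffering):

```python
pixels_per_line  = 3072   # "3 k" display
bytes_per_pixel  = 3
compression      = 5      # 5:1 display stream compression
lines_per_stripe = 16
buffers          = 2      # double buffering for stripes

stripe_bytes = pixels_per_line * bytes_per_pixel * lines_per_stripe // compression
ram_bytes = stripe_bytes * buffers   # 58 982 bytes, i.e. roughly 60 kB of RAM
```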
The location of each block in the unpacked image can differ from frame to frame and from stripe to stripe, depending on where the full-resolution area should be located on the display in each frame. There are a few options on how to achieve this, depending on the FPGA gate budget:
Similarly, there are a few options on how flexible the block configuration can be:
Number | Date | Country
---|---|---
20220377372 A1 | Nov 2022 | US