The disclosure relates to data transmission, and more particularly to a display driver integrated circuit, an image processor, and an operation method thereof.
Foveated rendering is an image computing technology that reduces the amount of data spent on image detail that will not be noticed by a user. When the human eye views a scene, not the entire field of view is sharp; only the point of gaze is sharp, and the field of view becomes blurrier toward the periphery. Therefore, when images are displayed on a screen, only a foveated area of the screen at which the human eye is gazing, usually a center area of the screen or a dynamically changing area determined based on an eye-tracking signal, needs to have the highest image resolution, and the resolution outside of the foveated area may be reduced. Accordingly, by reducing the resolution of the image around the foveated point, the amount of data transmitted between the screen and a host device providing the image data, as well as the computational load, can be greatly reduced. The foveated rendering technology is mainly applied in augmented reality (AR) devices and virtual reality (VR) devices that integrate an eye tracking technology.
After receiving the downscaled image LD1 and the cropped image HD1, the display driver integrated circuit 120 may scale up the downscaled image LD1, that is, increase the resolution of the downscaled image LD1, to generate an upscaled image IMG12. The display driver integrated circuit 120 may merge or blend the cropped image HD1 into a foveated area F12 of the upscaled image IMG12 to generate an output image to be displayed.
The application processor 110 issues an image write command to the display driver integrated circuit 120 to transmit the cropped image HD1 and the downscaled image LD1 to the display driver integrated circuit 120. Generally speaking, the application processor 110 completely transmits an image to the display driver integrated circuit 120 before starting to transmit another image. For example, the application processor 110 first completely transmits the cropped image HD1 to the display driver integrated circuit 120, and then starts to transmit the downscaled image LD1 to the display driver integrated circuit 120. Each image includes multiple lines, and each line includes multiple pixel data. After receiving at least one line of the downscaled image LD1, the display driver integrated circuit 120 may scale up the at least one line of the downscaled image LD1 to generate multiple corresponding lines of the upscaled image IMG12.
After the application processor 110 issues the image write command, the display driver integrated circuit 120 can only start to output the output image to be displayed to a display panel (not shown) after a period of time, which is referred to as a latency. The latency includes at least a transmission time of the image write command, a transmission time of the complete cropped image HD1, a transmission time of a first line of the downscaled image LD1, and an upscaling computing time for processing the first line of the downscaled image LD1. The latency becomes longer as the size of the cropped image HD1 increases. In a case where the sizes of the cropped image HD1 and the downscaled image LD1 do not change, in order to shorten the latency, the transmission speed of a transmission interface between the application processor 110 and the display driver integrated circuit 120 needs to be increased.
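For reference, the latency components listed in this paragraph may be summarized by the following inequality. It is merely a restatement of the list above rather than a formula appearing in the disclosure; T_cmd, T_HD1, T_LD1,line, and T_up,line respectively denote the transmission time of the image write command, the transmission time of the complete cropped image HD1, the transmission time of the first line of the downscaled image LD1, and the upscaling computing time for that line.

```latex
\[
  T_{\mathrm{latency}} \;\ge\;
  T_{\mathrm{cmd}} + T_{\mathrm{HD1}} + T_{\mathrm{LD1,\,line}} + T_{\mathrm{up,\,line}}
\]
```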
Although the foveated rendering technology can reduce the amount of transmitted data, the latency may not meet the application requirements of an AR product or a VR product. If the latency is to be reduced to meet the application requirements of the AR (or the VR) product, the transmission speed (or the bandwidth) of the transmission interface between the application processor 110 and the display driver integrated circuit 120 needs to be greatly increased. In fact, increasing the transmission speed (or the bandwidth) of the transmission interface without limit is impractical.
It should be noted that the content of the “Description of Related Art” section is used to help understand the disclosure. Part of the content (or all of the content) disclosed in the “Description of Related Art” section may not be the conventional technology known to persons skilled in the art. The content disclosed in the “Description of Related Art” section does not represent that the content is already known to persons skilled in the art before the application of the disclosure.
The disclosure provides a display driver integrated circuit, an image processor, and an operation method thereof to effectively reduce a latency.
In an embodiment of the disclosure, the display driver integrated circuit includes a receiving circuit, a memory unit, and a foveated rendering circuit. The receiving circuit is configured to receive a first image and a second image from an image providing circuit. The memory unit is configured to store the first image and the second image. The foveated rendering circuit is coupled to the memory unit. The foveated rendering circuit is configured to generate an output image to be displayed by performing image processing based on the first image and the second image. The first image is with respect to a foveated area of the output image. The receiving circuit receives at least a part of one of the first image and the second image before the other one of the first image and the second image is completely received.
In an embodiment of the disclosure, the operation method of the display driver integrated circuit includes the following steps. A first image and a second image are received from an image providing circuit by a receiving circuit of the display driver integrated circuit. The receiving circuit receives at least a part of one of the first image and the second image before the other one of the first image and the second image is completely received. The first image and the second image are stored in a memory unit. Image processing is performed based on the first image and the second image to generate an output image to be displayed. The first image is with respect to a foveated area of the output image.
In an embodiment of the disclosure, the image processor includes a digital signal processing circuit, a memory unit, and a transmitting circuit. The digital signal processing circuit is configured to generate a first image and a second image based on an original image. The first image is a cropped image with respect to a foveated area of the original image, and the second image is a downscaled image through scaling down the original image. The memory unit is coupled to the digital signal processing circuit. The memory unit is configured to store the first image and the second image. The transmitting circuit is coupled to the memory unit. The transmitting circuit is configured to transmit the first image and the second image to a display driver integrated circuit. The transmitting circuit transmits at least a part of one of the first image and the second image before the other one of the first image and the second image is completely transmitted.
In an embodiment of the disclosure, the operation method of the image processor includes the following steps. A first image and a second image are generated based on an original image by a digital signal processing circuit. The first image is a cropped image with respect to a foveated area of the original image, and the second image is a downscaled image through scaling down the original image. The first image and the second image are stored by a memory unit. The first image and the second image are transmitted to a display driver integrated circuit by a transmitting circuit of the image processor. The transmitting circuit transmits at least a part of one of the first image and the second image before the other one of the first image and the second image is completely transmitted.
Based on the above, the image providing circuit (for example, the image processor) according to the embodiments of the disclosure may generate the cropped image (the first image) and the downscaled image (the second image) based on the original image. The image processor may first transmit at least a part of one of the cropped image and the downscaled image to the display driver integrated circuit before the other one of the cropped image and the downscaled image is completely transmitted to the display driver integrated circuit. Therefore, the latency can be effectively reduced.
In order for the features and advantages of the disclosure to be more comprehensible, specific embodiments are described in detail below in conjunction with the accompanying drawings.
The term “coupling (or connection)” used in the entire specification (including the claims) of the present application may refer to any direct or indirect connection means. For example, if a first device is described as being coupled (or connected) to a second device, it should be interpreted that the first device may be directly connected to the second device or the first device may be indirectly connected to the second device through another device or certain connection means. Terms such as “first” and “second” mentioned in the entire specification (including the claims) of the present application are used to name the elements or to distinguish between different embodiments or ranges, but not to limit the upper limit or the lower limit of the quantity of elements or to limit the sequence of the elements. In addition, wherever possible, elements/components/steps using the same reference numerals in the drawings and embodiments represent the same or similar parts. Related descriptions of the elements/components/steps using the same reference numerals or using the same terminologies may be cross-referenced.
A data transmission method applied to a foveated rendering technology will be described below with some embodiments. In a case of limited interface bandwidth, the following embodiments can effectively reduce a latency.
In terms of the form of hardware, the digital signal processing circuit 211 and/or the transmitting circuit 213 may be implemented as logic circuits on an integrated circuit. Related functions of the digital signal processing circuit 211 and/or the transmitting circuit 213 may be implemented in hardware using hardware description languages (for example, Verilog HDL or VHDL) or other suitable programming languages. For example, the related functions of the digital signal processing circuit 211 and/or the transmitting circuit 213 may be implemented in one or more controllers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), and/or various logic blocks, modules, and circuits in other processing units.
In terms of the form of software and/or firmware, the related functions of the digital signal processing circuit 211 and/or the transmitting circuit 213 may be implemented as programming codes. For example, the digital signal processing circuit 211 and/or the transmitting circuit 213 are implemented using general programming languages (for example, C, C++, or assembly language) or other suitable programming languages. The programming codes may be recorded/stored in a “non-transitory readable medium”. For example, a disk, a card, a semiconductor memory, a programmable logic circuit, etc. may be used to implement the non-transitory readable medium. A central processing unit (CPU), a controller, a microcontroller, or a microprocessor may read and execute the programming codes from the recording medium, thereby implementing the related functions of the digital signal processing circuit 211 and/or the transmitting circuit 213.
The digital signal processing circuit 211 generates a first image and a second image based on an original image IMG21, wherein the first image is a cropped image HD2 with respect to a foveated area of the original image IMG21, and the second image is a downscaled image LD2 through scaling down the original image IMG21. The digital signal processing circuit 211 may define a foveated area in the original image IMG21. The digital signal processing circuit 211 may generate the cropped image HD2 by cropping out the foveated area of the original image IMG21. The digital signal processing circuit 211 may also scale down the original image IMG21 to generate the downscaled image LD2. The transmitting circuit 213 may transmit the downscaled image LD2 and the cropped image HD2 to the display driver integrated circuit 220. After the receiving circuit 221 of the display driver integrated circuit 220 receives and stores the downscaled image LD2 and the cropped image HD2, the foveated rendering circuit 223 may scale up the downscaled image LD2 to generate an upscaled image. The display driver integrated circuit 220 may merge or blend the cropped image HD2 into a foveated area of the upscaled image to generate an output image IMGout to be displayed.
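As a non-authoritative illustration of this generating step, the following sketch produces a cropped image and a downscaled image from an original frame. It assumes the frame is held as a NumPy array and uses nearest-neighbour subsampling for the downscale; the helper names, image sizes, and foveated-area coordinates are hypothetical and are not taken from the disclosure.

```python
import numpy as np

def crop_foveated_area(original, top, left, height, width):
    """Cut out the foveated area to form the cropped image (e.g. HD2)."""
    return original[top:top + height, left:left + width].copy()

def downscale(original, factor):
    """Reduce the resolution of the whole frame (e.g. to form LD2).
    Nearest-neighbour subsampling is used here purely for brevity."""
    return original[::factor, ::factor].copy()

# Hypothetical 1920x1920 RGB frame with a 960x960 foveated area at its center.
original = np.zeros((1920, 1920, 3), dtype=np.uint8)
cropped = crop_foveated_area(original, 480, 480, 960, 960)   # "first image"
downscaled = downscale(original, 2)                          # "second image"
assert cropped.shape == downscaled.shape  # equal sizes, one possible choice noted below
```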
In addition, in Step S310, the digital signal processing circuit 211 may also reduce the amount of data of the original image IMG21, that is, reduce the resolution of the original image IMG21 to generate the downscaled image LD2 (the second image). The resolution of the downscaled image LD2 may be determined according to the actual design. For example, but not limited to, the resolution (a.k.a. second resolution) of the downscaled image LD2 (the second image) may be the same as the resolution (a.k.a. first resolution) of the cropped image HD2 (the first image). In other words, the size of the downscaled image LD2 is the same as the size of the cropped image HD2.
The memory unit 212 is coupled to the digital signal processing circuit 211 to store the first image and the second image (Step S320). The transmitting circuit 213 is coupled to a receiving circuit 221 of the display driver integrated circuit 220. According to the actual implementation, the connection between the transmitting circuit 213 and the receiving circuit 221 may include a mobile industry processor interface (MIPI), a DisplayPort (DP) interface, an embedded DP (eDP) interface, or other transmission interfaces. In Step S330, the transmitting circuit 213 may transmit the first image (the cropped image HD2) and the second image (the downscaled image LD2) to the display driver integrated circuit 220. Transmitting the second image significantly reduces the amount of data transmitted, while transmitting the first image to the display driver integrated circuit 220 preserves the displayed image detail as much as possible.
In the embodiment shown in
In other words, the transmitting circuit 213 respectively transmits multiple first partial images of the cropped image HD2 (the first image) in multiple first transmitting time units, and respectively transmits multiple second partial images of the downscaled image LD2 (the second image) in multiple second transmitting time units. The first transmitting time units and the second transmitting time units are alternately arranged. Each of the first transmitting time units is long enough to transmit at least one line of the cropped image HD2 (the first image), and each of the second transmitting time units is long enough to transmit at least one line of the downscaled image LD2 (the second image). Therefore, the read cropped image HD2 and downscaled image LD2 may be transmitted to the display driver integrated circuit 220 in time division.
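A minimal sketch of this alternating order is given below. It assumes, purely for illustration, that each transmitting time unit carries exactly one line and that lines are available as Python objects; the function name and the one-line-per-unit granularity are assumptions rather than requirements of the disclosure.

```python
from itertools import zip_longest

def interleave_lines(cropped_lines, downscaled_lines):
    """Yield (label, line) pairs so that lines of the cropped image (first image)
    and the downscaled image (second image) alternate on the link."""
    for hd_line, ld_line in zip_longest(cropped_lines, downscaled_lines):
        if hd_line is not None:
            yield ("HD", hd_line)   # a first transmitting time unit
        if ld_line is not None:
            yield ("LD", ld_line)   # a second transmitting time unit

# Toy example: three lines per image.
schedule = list(interleave_lines(["hd0", "hd1", "hd2"], ["ld0", "ld1", "ld2"]))
# -> [('HD','hd0'), ('LD','ld0'), ('HD','hd1'), ('LD','ld1'), ('HD','hd2'), ('LD','ld2')]
```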
The transmitting circuit 213 may alternately read the first partial images of the cropped image HD2 and the second partial images of the downscaled image LD2 from the memory unit 212 in time division. T4_1, T4_2, T4_3, T4_4, ..., T4_n-1, and T4_n shown in
Therefore, the read cropped image HD2 and downscaled image LD2 may be transmitted to the display driver integrated circuit 220 in time division. After the transmission time unit T4_2, the display driver integrated circuit 220 may immediately scale up the first line of the downscaled image LD2 to generate multiple corresponding lines of the upscaled image, thereby starting to output a part of an output image IMGout to be displayed to a display panel (not shown). Therefore, before one of the cropped image HD2 and the downscaled image LD2 is completely transmitted to the display driver integrated circuit 220, the image processor 210 may first transmit at least a part of the other one of the cropped image HD2 and the downscaled image LD2 to the display driver integrated circuit 220. Based on this, the latency can be effectively reduced.
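A rough numerical comparison of the two transmission orders may help here. All figures below (link throughput, command size, image sizes, per-line upscaling time) are hypothetical and do not appear in the disclosure; the sketch only illustrates why sending the first line of the downscaled image early shortens the latency.

```python
# Hypothetical figures for illustration only.
LINK_BYTES_PER_US = 1000          # assumed interface throughput
CMD_BYTES = 64                    # assumed size of the image write command
HD2_BYTES = 960 * 960 * 3         # assumed complete cropped image HD2 (960x960 RGB)
HD2_LINE_BYTES = 960 * 3          # assumed one line of the cropped image HD2
LD2_LINE_BYTES = 960 * 3          # assumed one line of the downscaled image LD2
UPSCALE_LINE_US = 5.0             # assumed time to upscale one line of LD2

t_cmd = CMD_BYTES / LINK_BYTES_PER_US

# Sequential order: the complete cropped image precedes the first downscaled line.
latency_sequential = (t_cmd + HD2_BYTES / LINK_BYTES_PER_US
                      + LD2_LINE_BYTES / LINK_BYTES_PER_US + UPSCALE_LINE_US)

# Alternating order: only one cropped line precedes the first downscaled line.
latency_alternating = (t_cmd + HD2_LINE_BYTES / LINK_BYTES_PER_US
                       + LD2_LINE_BYTES / LINK_BYTES_PER_US + UPSCALE_LINE_US)

print(f"sequential:  {latency_sequential:.1f} us")   # roughly 2772.7 us with these figures
print(f"alternating: {latency_alternating:.1f} us")  # roughly 10.8 us with these figures
```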
The transmitting circuit 213 may alternately read the first partial images of the cropped image HD2 and the second partial images of the downscaled image LD2 from the memory unit 212 in time division. T5_1, T5_2, . . . , and T5_n shown in
Therefore, the read cropped image HD2 and downscaled image LD2 may be transmitted to the display driver integrated circuit 220 in time division. After the transmission time unit T5_1, the display driver integrated circuit 220 may immediately scale up the first line of the downscaled image LD2 to generate the corresponding lines of the upscaled image, thereby starting to output the output image IMGout to be displayed to the display panel (not shown). Therefore, before one of the cropped image HD2 and the downscaled image LD2 is completely transmitted to the display driver integrated circuit 220, the image processor 210 may first transmit at least a part of the other one of the cropped image HD2 and the downscaled image LD2 to the display driver integrated circuit 220. Based on this, the latency can be effectively reduced.
In the embodiment shown in
In the form of hardware, the receiving circuit 221 and/or the foveated rendering circuit 223 may be implemented as logic circuits on an integrated circuit. Related functions of the receiving circuit 221 and/or the foveated rendering circuit 223 may be implemented in hardware using hardware description languages (for example, Verilog HDL or VHDL) or other suitable programming languages. For example, the related functions of the receiving circuit 221 and/or the foveated rendering circuit 223 may be implemented in one or more controllers, microcontrollers, microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), and/or various logic blocks, modules, and circuits in other processing units.
In the form of software and/or firmware, the related functions of the receiving circuit 221 and/or the foveated rendering circuit 223 may be implemented as programming codes. For example, the receiving circuit 221 and/or the foveated rendering circuit 223 are implemented using general programming languages (for example, C, C++, or assembly language) or other suitable programming languages. The programming codes may be recorded/stored in a “non-transitory readable medium”. For example, a disk, a card, a semiconductor memory, a programmable logic circuit, etc. may be used to implement the non-transitory readable medium. A central processing unit (CPU), a controller, a microcontroller, or a microprocessor may read and execute the programming codes from the recording medium, thereby implementing the related functions of the receiving circuit 221 and/or the foveated rendering circuit 223.
The receiving circuit 221 may receive the first image (the cropped image HD2) and the second image (the downscaled image LD2) from the image providing circuit (for example, the image processor 210). The receiving circuit 221 receives at least a part of one of the first image and the second image before the other one of the first image and the second image is completely received. The time sequence of the receiving circuit 221 receiving the cropped image HD2 and the downscaled image LD2 from the image processor 210 is in accordance with the transmission time sequence of the image processor 210, and for the transmission time sequence of the image processor 210, reference may be made to the related descriptions of
The receiving circuit 221 respectively receives multiple parts of the first image (the cropped image HD2), a.k.a. multiple first partial images, in multiple first receiving time units, and respectively receives multiple parts of the second image (the downscaled image LD2), a.k.a. multiple second partial images, in multiple second receiving time units, wherein the first receiving time units and the second receiving time units are alternately arranged. Each of the first receiving time units is long enough to receive at least one line of the cropped image HD2, and each of the second receiving time units is long enough to receive at least one line of the downscaled image LD2. The memory unit 222 is coupled to the receiving circuit 221 to store the first image (the cropped image HD2) and the second image (the downscaled image LD2) (Step S620). According to the actual implementation, in some examples, the resolution (a.k.a. first resolution) of the cropped image HD2 is the same as the resolution (a.k.a. second resolution) of the downscaled image LD2. In other examples, the resolution of the cropped image HD2 may be different from the resolution of the downscaled image LD2.
The foveated rendering circuit 223 is coupled to the memory unit 222. In Step S630, the foveated rendering circuit 223 generates the output image IMGout to be displayed by performing image processing based on the first image (the cropped image HD2) and the second image (the downscaled image LD2), wherein the first image is with respect to a foveated area of the output image IMGout. The resolution of the cropped image HD2 is different from the resolution (a.k.a. third resolution) of the output image IMGout.
Image processing performed by the foveated rendering circuit 223 based on the first image (the cropped image HD2) and the second image (the downscaled image LD2) is described below. First, the foveated rendering circuit 223 may scale up the downscaled image LD2 to generate an upscaled image. The foveated rendering circuit 223 may blend the cropped image HD2 and the upscaled image, and the cropped image HD2 is blended into a foveated area of the upscaled image to generate the output image IMGout. The resolution of the upscaled image is the same as the resolution of the output image IMGout. Image data of the foveated area of the output image IMGout may be different from image data of the foveated area of the original image on the image processor side.
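A minimal sketch of this two-step processing follows. It assumes NumPy arrays, nearest-neighbour upscaling by pixel repetition, and a hard replacement of the foveated area instead of a weighted blend; the helper names, the 2x factor, and the blending rule are illustrative assumptions rather than the circuit's actual implementation.

```python
import numpy as np

def upscale_nearest(image, factor):
    """Scale the downscaled image (e.g. LD2) back up by pixel repetition."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def blend_into_foveated_area(upscaled, cropped, top, left):
    """Place the cropped image (e.g. HD2) into the foveated area of the upscaled image.
    A real foveated rendering circuit may instead weight the two images near the border."""
    out = upscaled.copy()
    h, w = cropped.shape[:2]
    out[top:top + h, left:left + w] = cropped
    return out

# Toy example consistent with the earlier sketch: 960x960 images, 2x upscale.
ld2 = np.zeros((960, 960, 3), dtype=np.uint8)
hd2 = np.full((960, 960, 3), 255, dtype=np.uint8)
img_out = blend_into_foveated_area(upscale_nearest(ld2, 2), hd2, 480, 480)
assert img_out.shape == (1920, 1920, 3)
```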
In the embodiment shown in
The above operations such as data upscaling, blending, and outputting the upscaled image may be performed on horizontal display lines. That is, the data upscaling/blending operation may be performed without waiting for a complete source image frame to be received. After receiving the first line of the downscaled image, the display driver integrated circuit 220 may start to generate a first line of the output image IMGout to be displayed to the display panel (not shown). Therefore, the latency can be effectively shortened.
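The line-based behaviour may be sketched as a generator that produces output lines as soon as each line of the downscaled image arrives, rather than after a complete frame. The 2x scaling factor, the nearest-neighbour line repetition, and the representation of a line as a NumPy row are assumptions made only for this illustration.

```python
import numpy as np

def stream_output_lines(downscaled_lines, scale=2):
    """Emit output lines as soon as each line of the downscaled image arrives,
    instead of waiting for the complete frame (line-based upscaling)."""
    for line in downscaled_lines:
        upscaled_line = np.repeat(line, scale, axis=0)   # widen the line horizontally
        for _ in range(scale):                           # repeat it vertically
            yield upscaled_line                          # blending with HD2 lines would occur here

# Toy example: two incoming 4-pixel RGB lines produce four 8-pixel output lines.
incoming = [np.zeros((4, 3), dtype=np.uint8), np.ones((4, 3), dtype=np.uint8)]
out_lines = list(stream_output_lines(incoming))
assert len(out_lines) == 4 and out_lines[0].shape == (8, 3)
```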
In the embodiment shown in
The foveated rendering circuit 223 shown in
The decoder circuits DEC81 and DEC82 are coupled to the memory unit 222. In the embodiment shown in
In summary, the image providing circuit (for example, the image processor) according to the embodiments of the disclosure may generate the cropped image (the first image) and the downscaled image (the second image) based on the original image. The image processor may first transmit at least a part of one of the cropped image and the downscaled image to the display driver integrated circuit before the other one of the cropped image and the downscaled image is completely transmitted to the display driver integrated circuit. Therefore, the latency can be effectively reduced.
Although the disclosure has been disclosed in the above embodiments, the embodiments are not intended to limit the disclosure. Persons skilled in the art may make some changes and modifications without departing from the spirit and scope of the disclosure. The protection scope of the disclosure shall be defined by the appended claims.
This application claims the priority benefit of U.S. Provisional Application No. 63/151,808, filed on Feb. 22, 2021. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.