DISPLAY DEVICE AND CONTROL METHOD THEREOF

Information

  • Patent Application
  • Publication Number
    20210012754
  • Date Filed
    January 22, 2020
  • Date Published
    January 14, 2021
Abstract
A display device includes a display; and a processor configured to control the display to display an output image including video content and a plurality of graphic objects, wherein the processor is configured to: obtain a video image by processing input video content, obtain a plurality of graphic images including each of the plurality of graphic objects by processing each of the plurality of graphic objects in parallel, and obtain the output image by mixing the video image and the plurality of graphic images.
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application is based on and claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2019-0082029, filed on Jul. 8, 2019, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.


BACKGROUND
Field

Apparatuses and methods consistent with the disclosure relate to a display device and a control method thereof, and more particularly, to a display device for providing a graphic image of a high resolution and a control method thereof.


Description of the Related Art

In accordance with the development of electronic technology, various types of electronic devices have been developed and spread. In particular, display devices used in various places such as homes, offices, public places, and the like have been continuously developed in recent years.


In recent years, the demand for high resolution image services has increased greatly. Due to such demand, the resolution of display devices is increasing. However, to match the current performance of a graphics processing unit (GPU), a graphic image is processed at a resolution lower than that of the display device, up-scaled in size, and then mixed with video content. As a result, there is a problem that resolution deterioration and image quality deterioration of the graphic image occur.


SUMMARY OF THE INVENTION

Embodiments of the disclosure overcome the above disadvantages and other disadvantages not described above. Also, the disclosure is not required to overcome the disadvantages described above, and an embodiment of the disclosure may not overcome any of the problems described above.


According to an embodiment of the disclosure, a display device includes a display; and a processor configured to control the display to display an output image including video content and a plurality of graphic objects, wherein the processor is configured to: obtain a video image by processing input video content, obtain a plurality of graphic images including each of the plurality of graphic objects by processing each of the plurality of graphic objects in parallel, and obtain the output image by mixing the obtained video image and the obtained plurality of graphic images.


According to another embodiment of the disclosure, a control method of a display device includes obtaining a video image by processing input video content; obtaining a plurality of graphic images including each of a plurality of graphic objects by processing each of the plurality of graphic objects in parallel; and displaying an output image including the video content and the plurality of graphic objects based on the obtained video image and the obtained plurality of graphic images.


According to another embodiment of the disclosure, a non-transitory computer-readable medium storing computer instructions that, when executed by a processor of a display device, cause the display device to perform operations is provided, wherein the operations include obtaining a video image by processing input video content; obtaining a plurality of graphic images including each of a plurality of graphic objects by processing each of the plurality of graphic objects in parallel; and displaying an output image including the video content and the plurality of graphic objects based on the obtained video image and the obtained plurality of graphic images.





BRIEF DESCRIPTION OF THE DRAWING FIGURES

The above and/or other aspects of the disclosure will be more apparent by describing certain embodiments of the present disclosure with reference to the accompanying drawings, in which:



FIGS. 1A and 1B are diagrams illustrating implementations of a display device according to an embodiment of the disclosure.



FIGS. 2A and 2B are block diagrams illustrating configurations of the display device according to an embodiment of the disclosure.



FIGS. 3A and 3B are diagrams illustrating implementations of a processor according to diverse embodiments of the disclosure.



FIG. 4 is a diagram illustrating implementation of a processor according to another embodiment of the disclosure.



FIGS. 5A and 5B are diagrams illustrating a processing sequence according to diverse embodiments of the disclosure.



FIGS. 6A, 6B, 6C and 6D are diagrams illustrating a processing sequence according to diverse embodiments of the disclosure.



FIG. 7 is a diagram illustrating one implementation of a display device according to another embodiment of the disclosure.



FIG. 8 is a flowchart illustrating a control method of a display device according to an embodiment of the disclosure.





DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

The disclosure provides a display device for providing, at high resolution, a graphic image displayed together with high resolution video content, and a control method thereof.


Hereinafter, the disclosure will be described in detail with reference to the accompanying drawings.


After terms used in the specification are briefly described, the disclosure will be described in detail.


General terms that are currently widely used were selected as terms used in embodiments of the disclosure in consideration of functions in the disclosure, but may be changed depending on the intention of those skilled in the art or a judicial precedent, an emergence of a new technique, and the like. In addition, in a specific case, terms arbitrarily chosen by an applicant may exist. In this case, the meaning of such terms will be mentioned in detail in a corresponding description portion of the disclosure. Therefore, the terms used in the disclosure should be defined on the basis of the meaning of the terms and the contents throughout the disclosure rather than simple names of the terms.


Terms ‘first’, ‘second’, and the like may be used to describe various components, but the components should not be limited by the terms. The terms are used only to distinguish one component from another component.


Singular forms are intended to include plural forms unless the context clearly indicates otherwise. It should be further understood that terms “include” or “constituted” used in the application specify the presence of features, numerals, steps, operations, components, parts mentioned in the specification, or combinations thereof, but do not preclude the presence or addition of one or more other features, numerals, steps, operations, components, parts, or combinations thereof.


The expression “at least one of A or B” should be understood to represent “A”, “B”, or both “A” and “B”.


In the disclosure, a ‘module’ or a ‘˜er/˜or’ may perform at least one function or operation, and may be implemented by hardware, by software, or by a combination of hardware and software. In addition, a plurality of ‘modules’ or a plurality of ‘˜ers/˜ors’ may be integrated in at least one module and be implemented as at least one processor (not illustrated), except for a ‘module’ or an ‘˜er/˜or’ that needs to be implemented by specific hardware.


Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the disclosure pertains may easily practice the disclosure. However, the disclosure may be implemented in various different forms and is not limited to the embodiments described herein. In addition, in the drawings, portions unrelated to the description will be omitted to obviously describe the disclosure, and similar portions will be denoted by similar reference numerals throughout the specification.



FIGS. 1A and 1B are diagrams illustrating implementations of a display device according to an embodiment of the disclosure.


A display device 100 may be implemented as a television (TV), but is not limited thereto and may be any device having a display function, such as a video wall, a large format display (LFD), digital signage, a digital information display (DID), a projector display, or the like.


The display device 100 may receive various compressed images and/or images of various resolutions. For example, the display device 100 may receive an image compressed in a format such as moving picture experts group (MPEG) (e.g., MP2, MP4, MP7, etc.), joint photographic experts group (JPEG), advanced video coding (AVC), H.264, H.265, high efficiency video coding (HEVC), and the like. Alternatively, the display device 100 may receive any one of a standard definition (SD) image, a high definition (HD) image, a full HD image, and an ultra HD image.


In addition, the display device 100 may perform various image processing based on characteristics of the received image. Here, the image may be a digital image. The digital image refers to an image obtained by transforming an analog image through a process of sampling, quantization, and coding. The digital image may be distinguished by arranging each pixel in a two-dimensional form using coordinates or matrix positions. The image processing may be a digital image processing including at least one of image enhancement, image restoration, image transformation, image analysis, image understanding, or image compression.


According to one example, the display device 100 may be implemented in the form of a video wall as illustrated in FIG. 1A. For example, the video wall may be implemented in the form of physically connecting a plurality of display modules. Here, each display module may include a self-light emitting element including at least one of a light emitting diode (LED), a micro LED, or a mini LED. For example, the display module may be implemented as an LED module in which each of the plurality of pixels is implemented as an LED pixel, or an LED cabinet to which a plurality of LED modules are connected, but is not limited thereto.


In this case, the resolution of the display device 100 may be changed according to the number of display modules constituting the display device 100. As an example, in a case in which each display module has a resolution of 1K, when the display modules are connected in the form of 8×8, a high resolution display having a resolution of 8K may be implemented. However, the display is not limited thereto and may support various resolutions or aspect ratios such as 7×7, 6×6, and 8×4 by assembling the display modules according to a user's request. In this case, one display module may have various sizes such as 960×540 (1K), 480×270 (0.5K), 640×480, and the like. For example, the video wall is a structure in which a plurality of display modules are combined to form a large display, such as a wall of various sizes, and may be implemented with a bezel minimized or absent to minimize discontinuity at the points where modules are connected.

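The module arithmetic above (an 8×8 grid of 1K modules yielding an 8K display) can be illustrated with a short calculation. This is a non-limiting sketch; the 960×540 module size is taken from the example in the description, and the variable names are illustrative:

```python
# Illustrative check of the video-wall arithmetic described above:
# an 8 x 8 grid of 960 x 540 (1K) display modules yields a
# 7680 x 4320 (8K) display.
MODULE_W, MODULE_H = 960, 540   # one display module (1K), per the example above
GRID_COLS, GRID_ROWS = 8, 8     # 8 x 8 module arrangement

wall_w = MODULE_W * GRID_COLS   # total horizontal pixels
wall_h = MODULE_H * GRID_ROWS   # total vertical pixels
```

The same arithmetic applies to the other arrangements mentioned (7×7, 6×6, 8×4), each producing a different total resolution and aspect ratio.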

According to an embodiment, only a portion of the entire region of the video wall may be used for playing the video content, and a background image may be provided to the remaining region. As an example, the position and/or size of the region used for playing the content may be changed based on a user selection command, a type of image content, and the like. For example, the display device 100 may detect a user's position, a viewing distance, and the like by using a sensor to provide the video content to a region of an appropriate size at an appropriate position.


In the case in which the display device 100 is implemented as the video wall, the display device 100 may output content 10 including a video region 11 including the video content and a background region 12 including the background image. However, in a case in which the graphic image provided as the background image has a high resolution (e.g., 8K), a GPU of current performance has difficulty maintaining and outputting the high resolution, and in this case, there is a problem that resolution deterioration and/or image quality deterioration of the background region occur.


Accordingly, hereinafter, diverse embodiments in which a high resolution background image may be processed without loss of resolution and provided will be described.



FIGS. 2A and 2B are block diagrams illustrating configurations of the display device according to an embodiment of the disclosure.


Referring to FIG. 2A, the display device 100 includes a display 110 and a processor 120.


The display 110 may be implemented as a display including a self-light emitting element or as a display including a non-light emitting element and a backlight. For example, the display 110 may be implemented as various forms of displays such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a light emitting diode (LED) display, a micro LED display, a mini LED display, a plasma display panel (PDP), a quantum dot (QD) display, a quantum dot light emitting diode (QLED) display, and the like. The display 110 may also include a driving circuit, a backlight unit, and the like, which may be implemented in the form of an a-Si thin film transistor (TFT), a low temperature poly silicon (LTPS) TFT, or an organic TFT (OTFT). Meanwhile, the display 110 may be implemented as a touch screen combined with a touch sensor, a flexible display, a rollable display, a three-dimensional (3D) display, a display in which a plurality of display modules are physically connected to each other, and the like.


The processor 120 controls an overall operation of the display device 100.


According to an embodiment, the processor 120 may be implemented as a digital signal processor (DSP), a microprocessor, a graphics processing unit (GPU), an artificial intelligence (AI) processor, or a timing controller (TCON) for processing a digital image signal. However, the processor 120 is not limited thereto, and may include one or more of a central processing unit (CPU), a micro controller unit (MCU), a micro processing unit (MPU), a controller, an application processor (AP), a communication processor (CP), and an ARM processor, or may be defined by the corresponding term. In addition, the processor 120 may also be implemented as a system-on-chip (SoC) or a large scale integration (LSI) in which a processing algorithm is embedded, and may also be implemented in the form of a field programmable gate array (FPGA).


The processor 120 performs image processing on an input image to obtain an output image. Here, the input image and the output image may be images having standard definition (SD), high definition (HD), full HD, ultra high definition (UHD), or 8K (7680×4320) UHD or higher resolution (e.g., 16K, 32K), but are not limited thereto.


The processor 120 may control the display 110 to display an output image including a video image and a plurality of graphic objects.


To this end, the processor 120 may process input video content (or a video signal) to obtain a video image in which the video content is included in one region. Specifically, the processor 120 may process, in real time, video content (or a video signal) input in real time to obtain a video image in which the video content is included in one region. For example, assuming that the output image is an 8K image, a video image of 8K size in which the video content is included in one region and the remaining region has a predetermined pixel value may be obtained. Here, the predetermined pixel value may be, for example, 0, but is not limited thereto. For example, a video image of 8K size in which the video content is included in one region and the remaining region does not have a predetermined pixel value may also be obtained.


In addition, the processor 120 may process each of the plurality of graphic objects in parallel to obtain a plurality of graphic images including each of the plurality of graphic objects. For example, the processor 120 may render each of the plurality of graphic objects in parallel or decode a compressed image including each of the plurality of graphic objects in parallel to obtain the plurality of graphic images.


In this case, the processor 120 may obtain the output image by alpha blending the video image and the plurality of graphic images based on an alpha value corresponding to at least one of the plurality of graphic images. Here, alpha blending refers to a method of creating a transparency effect when overlapping one image on another by assigning a new value A (alpha) to the color values RGB, so that the background RGB values and the RGB values above them are displayed as a mixture. For example, the alpha values are divided into values of 0 to 255 or values of 0.0 to 1.0. 0 may mean completely transparent, and 255 (or the highest value, such as 1.0), which is the reverse thereof, may mean fully opaque. Alternatively, 0 may mean fully opaque, and 255 (or the highest value, such as 1.0) may mean completely transparent. For example, in a case in which values from 0 to 255 are represented by assigning 8 bits to the alpha values, the larger the value, the higher the contribution ratio of the corresponding pixel, and the lower the value, the lower the contribution ratio. According to one example, in a case in which the video V and a graphic G are mixed, the mixing operation may be represented by a mathematical expression such as V*Alpha+G*(1−Alpha), V*(1−Alpha)+G*Alpha, or V*Alpha+G.

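As a non-limiting illustration, the per-pixel alpha blending described above may be sketched as follows. The sketch uses the V*(1−Alpha)+G*Alpha convention with 8-bit alpha values (0 meaning the video shows through fully, 255 meaning the graphic is fully opaque); the function name and toy image sizes are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def alpha_blend(video, graphic, alpha):
    """Per-pixel alpha blend: V*(1-Alpha) + G*Alpha.

    video, graphic: uint8 arrays of shape (H, W, 3)
    alpha: uint8 array of shape (H, W); 0 = video only, 255 = graphic only
    """
    a = alpha.astype(np.float32)[..., None] / 255.0
    out = video.astype(np.float32) * (1.0 - a) + graphic.astype(np.float32) * a
    return out.round().astype(np.uint8)

# Toy 1x2 image: left pixel fully video, right pixel fully graphic.
video = np.array([[[200, 0, 0], [200, 0, 0]]], dtype=np.uint8)
graphic = np.array([[[0, 100, 0], [0, 100, 0]]], dtype=np.uint8)
alpha = np.array([[0, 255]], dtype=np.uint8)
blended = alpha_blend(video, graphic, alpha)
```

The other conventions mentioned above (V*Alpha+G*(1−Alpha), or the premultiplied form V*Alpha+G) follow by swapping or dropping the corresponding factor.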

According to an embodiment, the output image may be an image in which the video content is included in some region (hereinafter referred to as a first region) and at least one graphic object is included in the remaining region (hereinafter referred to as a second region). For example, assuming that the output image is an image of 8K, as illustrated in FIG. 1B, the output image may be an image in which the video content is played in some region of the image of 8K and the background image is included in the remaining region.


According to one example, each of the first region and the second region may include a graphic object having a first resolution. Here, the first resolution may be the same resolution as that of the output image (or the same resolution as that of the display 110), but is not limited thereto. For example, in a case in which the output image is an image of 8K, the first resolution may be 8K resolution. In this case, the processor 120 may obtain the output image by alpha blending a first graphic image including the first graphic object, a second graphic image including the second graphic object, and the video image based on the alpha value corresponding to at least one of the first or second graphic image.


According to another example, the graphic object having the first resolution may be included in the second region, and a graphic object having a second resolution may be included in at least one of the first region or the second region. Here, the second resolution may be a resolution less than the first resolution.


In this case, the processor 120 may up-scale an image including the graphic object having the second resolution (hereinafter referred to as a graphic image having the second resolution) together with an alpha value corresponding to that graphic image, and obtain the output image by alpha blending an image including the graphic object having the first resolution (hereinafter referred to as a graphic image having the first resolution), the video image, and the up-scaled graphic image based on the up-scaled alpha value.


As illustrated in FIG. 2B, a display device 100′ may further include a memory 130.


The memory 130 may store data necessary for diverse embodiments of the disclosure. The memory 130 may be implemented in the form of a memory embedded in the display device 100′ or in the form of a memory attachable to and detachable from the display device 100′, depending on a data storing purpose. For example, data for driving the display device 100′ may be stored in the memory embedded in the display device 100′, and data for an extension function of the display device 100′ may be stored in the memory attachable to and detachable from the display device 100′. Meanwhile, the memory embedded in the display device 100′ may be implemented as at least one of a volatile memory (e.g., a dynamic random access memory (DRAM), a static RAM (SRAM), a synchronous dynamic RAM (SDRAM), or the like), or a non-volatile memory (e.g., a one time programmable read only memory (OTPROM), a programmable ROM (PROM), an erasable and programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a mask ROM, a flash ROM, a flash memory (e.g., a NAND flash, a NOR flash, or the like), a hard drive, or a solid state drive (SSD)). In addition, the memory attachable to and detachable from the display device 100′ may be implemented in a form such as a memory card (e.g., a compact flash (CF), a secure digital (SD), a micro secure digital (Micro-SD), a mini secure digital (Mini-SD), an extreme digital (xD), a multi-media card (MMC), or the like), an external memory connectable to a USB port (e.g., a USB memory), or the like.


According to an embodiment, the memory 130 may store an image received from an external device (e.g., a source device), an external storage medium (e.g., USB), an external server (e.g., web hard), or the like. Here, the image may be a digital video, but is not limited thereto.


According to an embodiment, the processor 120 may write the graphic image having the first resolution to the memory 130, read the graphic image, and mix the graphic image with the video image. In this case, at a frequency lower than an output frequency (or a frame rate) of the display 110, the processor 120 may write the graphic image having the first resolution to the memory 130, read the graphic image having the first resolution, mix the graphic image having the first resolution with the video image, and then output the mixed image. For example, the processor 120 may read the graphic image having the first resolution from the memory 130 at the same output frequency as the output frequency of the display 110, that is, an output frequency of the video image. Accordingly, the video image and the graphic image having the first resolution may be mixed and output at the same output frequency.


For example, in a case in which the display 110 is a display having 8K resolution implemented with 7680×4320 pixels, and the processor 120 outputs a video image having 8K resolution to the display 110 at 60 Hz, even when the performance of the GPU is low, the graphic image having 8K resolution may be rendered by rendering it at a lower frequency. For example, the processor 120 may render the graphic image having 8K resolution at 15 Hz and write the graphic image to the memory 130. Subsequently, the processor 120 may repeatedly read and output the graphic image having 8K resolution stored in the memory 130 at the output frequency of the display 110, that is, 60 Hz. That is, in a case in which the graphic image of 8K resolution is an image that does not change or has little movement for a predetermined time, the processor 120 may store the image in the memory 130 at 8K resolution and then repeatedly read and output the stored image to be synchronized with the output frequency of the display 110. Accordingly, even in a case in which the performance of the GPU of the processor 120 is somewhat low, the resolution of the high resolution graphic image may be maintained.

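The 15 Hz render / 60 Hz readout scheme described above can be sketched as a simple output loop. This is a non-limiting illustration: `render_graphic()` is a stand-in for the GPU rendering path, and the frame counts merely mirror the 60 Hz / 15 Hz example in the text:

```python
# Hedged sketch of the frequency-decoupled rendering described above:
# the high-resolution graphic image is re-rendered only at 15 Hz
# (every 4th output frame) and otherwise re-read from memory, while
# output proceeds at the display's 60 Hz frequency.

OUTPUT_HZ = 60
GRAPHIC_HZ = 15
RATIO = OUTPUT_HZ // GRAPHIC_HZ   # re-render every 4th output frame

def render_graphic(frame_index):
    # Stand-in for the GPU rendering an 8K graphic image.
    return f"graphic@{frame_index}"

renders = 0
buffered = None                   # models the copy written to memory
outputs = []
for frame in range(60):           # one second of output frames
    if frame % RATIO == 0:        # 15 Hz: render and write to memory
        buffered = render_graphic(frame)
        renders += 1
    outputs.append(buffered)      # 60 Hz: read the stored image and mix
```

Over one second, the GPU renders only 15 times while 60 frames are output, which is the point of the scheme: the readout, not the render, is synchronized with the display's output frequency.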

According to one example, the processor 120 may mix the graphic image having the first resolution and the video image using alpha blending.


For example, assume that the background image provided in the second region is a graphic image having 8K resolution (hereinafter, referred to as an 8K graphic image). In this case, the processor 120 may obtain the output image including the video content and the graphic image based on the 8K video image in which the video content is included in one region, the 8K graphic image, and an alpha blending image (or an alpha map) corresponding to the 8K graphic image.


Here, the alpha blending image may be a map (e.g., a map of 8K size) in which an alpha value corresponding to a pixel position of the first region is smaller than a threshold value, and an alpha value corresponding to a pixel position of the second region is greater than the threshold value. For example, in a case in which the alpha value corresponding to the first region is 0 and the alpha value corresponding to the background region is 255, through the alpha blending, the transparency of the 8K graphic image is large in the first region and is reduced in the second region. Accordingly, an output image in which the video image is included in the first region and the 8K graphic image is included in the second region may be obtained.

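Building such an alpha map may be sketched as follows. This is a non-limiting illustration: the map is toy-sized rather than 8K, the rectangle coordinates of the video region are invented for the example, and the 0-in-the-video-region convention follows the description above:

```python
import numpy as np

# Hedged sketch of the alpha map described above: alpha 0 inside the
# video region (the graphic is fully transparent there, so the video
# shows through) and alpha 255 in the background region (the graphic
# is fully opaque). An 8x8 map stands in for an 8K-sized map.
H, W = 8, 8                       # toy stand-in for 4320 x 7680
y0, y1, x0, x1 = 2, 6, 2, 6       # illustrative video-region rectangle

alpha_map = np.full((H, W), 255, dtype=np.uint8)  # background: graphic opaque
alpha_map[y0:y1, x0:x1] = 0                       # video region: graphic transparent
```

Blending the 8K video image and the 8K graphic image with this map then yields the composite described above: video content in the first region, graphic background in the second.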

According to another example, the processor 120 may mix the 8K graphic image and the video image based on position information on the first region. For example, the processor 120 may obtain pixel coordinate information of the first region at the time of processing the video image. Accordingly, the processor 120 may mix the 8K graphic image and the video image with each other by providing the scaled video content to the corresponding region and providing the 8K graphic image to the remaining region, based on the obtained pixel coordinate information.


On the other hand, according to another embodiment, the processor 120 may also obtain the output image by mixing the graphic image having the first resolution, the graphic image having the second resolution less than the first resolution, and the video image having the first resolution. In this case, the processor 120 may upscale the graphic image having the second resolution to the first resolution, read the graphic image having the first resolution from the memory 130, and mix the graphic image having the first resolution with the video image and the up-scaled graphic image. Here, the up-scaled graphic image may be mixed into at least one of the video image region or the background image region. For example, the processor 120 may upscale a graphic image having 2K or 4K resolution to the 8K resolution and mix the up-scaled graphic image with the 8K graphic image and the 8K video image.


For example, the processor 120 may render the graphic image having 2K or 4K resolution and write the rendered graphic image to the memory 130, and may up-scale the graphic image of 2K or 4K resolution stored in the memory 130 to an 8K size. In this case, the processor 120 may render the graphic image of 2K or 4K resolution at the same frequency as the output frequency of the display 110. Here, it is assumed that the performance of the GPU of the processor 120 is insufficient to render at 8K resolution at the same frequency as the output frequency of the display 110, as described above, but is sufficient to render the graphic image having 2K or 4K resolution at that frequency.


For example, in a case in which the processor 120 outputs the video image to the display 110 at 60 Hz, the processor 120 may render the graphic image having 2K or 4K resolution at 60 Hz, and render the graphic image having 8K resolution at 15 Hz to be written to the memory 130.


Subsequently, the processor 120 may up-scale the size of the graphic image having 2K or 4K resolution rendered at 60 Hz and stored in the memory 130, and repeatedly read and output the graphic image having 8K resolution at 60 Hz.


In this case, the processor 120 may mix the up-scaled graphic image with the video image and then mix the mixed image with the 8K graphic image. Alternatively, the processor 120 may mix the up-scaled graphic image with the 8K graphic image and then mix the mixed image with the video image. Alternatively, the processor 120 may mix the 8K graphic image and the video image, and then mix the mixed image with the up-scaled graphic image.


Here, the alpha blending may be used for mixing between the respective images. However, the disclosure is not limited thereto, and each image may be mixed using position information (or coordinate information) of the region included in each image.


In a case in which the processor 120 mixes the respective images using the alpha blending, the processor 120 may up-scale an alpha value corresponding to the graphic image of 2K or 4K resolution to the 8K size, and then perform the mixing using the up-scaled alpha value. For example, the scaling of the alpha value may use at least one scaling technique (or interpolation technique) of polyphase scaling (or polyphase interpolation), trilinear scaling (or trilinear interpolation), linear scaling (or linear interpolation), or bilinear scaling (or bilinear interpolation). For example, in a case in which an alpha map of 8K size is generated based on an alpha map of 4K size, the alpha values corresponding to the pixels being added need to be interpolated, and the alpha map of 8K size may be generated by using a variety of existing interpolation techniques.

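Of the interpolation techniques listed above, bilinear interpolation of an alpha map may be sketched as follows. This is a non-limiting illustration of one such technique, not the claimed implementation; the function name and the toy 2×2 → 4×4 sizes are invented for the example:

```python
import numpy as np

def bilinear_upscale(alpha, out_h, out_w):
    """Up-scale a 2D alpha map with bilinear interpolation.

    alpha: uint8 array of shape (h, w); returns uint8 array (out_h, out_w).
    Each output sample interpolates linearly between its four nearest
    source alpha values, which is how the alpha values corresponding to
    added pixels are filled in.
    """
    h, w = alpha.shape
    ys = np.linspace(0, h - 1, out_h)          # source row coordinates
    xs = np.linspace(0, w - 1, out_w)          # source column coordinates
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = alpha.astype(np.float64)
    top = a[y0][:, x0] * (1 - wx) + a[y0][:, x1] * wx   # blend along x, row y0
    bot = a[y1][:, x0] * (1 - wx) + a[y1][:, x1] * wx   # blend along x, row y1
    return ((1 - wy) * top + wy * bot).round().astype(np.uint8)

# Toy example: a 2x2 map with a hard 0/255 edge becomes a smooth ramp.
small = np.array([[0, 255], [0, 255]], dtype=np.uint8)
big = bilinear_upscale(small, 4, 4)
```

Note how the hard 0/255 boundary becomes a gradual ramp (0, 85, 170, 255) after up-scaling; this smoothing at region boundaries is why the up-scaled alpha map, rather than a nearest-neighbor copy, is used for the mixing.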

According to one implementation, the processor 120 may include a graphics processing unit (GPU) that renders the graphic image having the first resolution at a frequency lower than the output frequency of the display 110 and stores the rendered graphic image in the memory 130, and renders the graphic image having the second resolution at the output frequency of the display 110.


As an example, the processor 120 may include a GPU configured to process the graphic image having the second resolution, a first mixer configured to firstly mix the graphic image having the second resolution processed by the GPU with the video image, a decoder configured to decode the graphic image having the first resolution and write the decoded graphic image to the memory 130, and a second mixer configured to read the graphic image having the first resolution written to the memory 130 and secondly mix the read graphic image with a first mixed image, that is, an output of the first mixer. In this case, the processor 120 may further include a video processor configured to process the video image or the first mixed image.


As another example, the processor 120 may include a decoder configured to process the graphic image having the first resolution and write the processed graphic image to the memory 130, a first mixer configured to read the graphic image having the first resolution written to the memory 130 and firstly mix the read graphic image with the video image, a GPU configured to process the graphic image having the second resolution, and a second mixer configured to secondly mix the graphic image having the second resolution processed by the GPU with a first mixed image, that is, an output of the first mixer. In this case, the processor 120 may further include a video processor configured to process the video image or the first mixed image.



FIGS. 3A and 3B are diagrams illustrating implementations of a processor according to diverse embodiments of the disclosure.


Referring to FIG. 3A, the processor 120 may include a video processor 121, a GPU 122, a decoder 125, a first scaler 123, a first mixer 124, a second scaler 126, and a second mixer 127. Here, each component may be implemented by at least one software, at least one hardware, or a combination thereof.


According to an example, software or hardware logic corresponding to each component may be implemented in a single chip. For example, the video processor 121 may be implemented as a digital signal processor (DSP), and may be implemented in a single chip with a graphics processing unit (GPU) (or a visual processing unit (VPU)) and other components. However, the disclosure is not limited thereto, and software or hardware logics corresponding to some of the components may be implemented in a single chip, and software or hardware logics corresponding to the remainder may be implemented in another chip.



FIG. 3A is a diagram illustrating a case in which a high resolution compressed image is decoded by the decoder 125 and used as the background image according to an embodiment of the disclosure.


The video processor 121 may perform various image quality processing on input video content. Here, various image quality processing may include noise filtering, texture processing, edge processing, scaling, and the like.


According to an embodiment, the video processor 121 may output a video image 31 having a first resolution (e.g., 8K) equal to or greater than a threshold resolution. Here, the video image 31 may be an image in which a video region is included in one region and the remaining regions have a predetermined pixel value (e.g., 0).
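As a minimal sketch (an assumption for illustration, not the patented implementation), the composition of the video image 31 described above can be expressed as placing the processed video frame into one region of a first-resolution canvas whose remaining pixels hold a predetermined value such as 0:

```python
def make_video_plane(frame, canvas_h, canvas_w, top, left, fill=0):
    # frame: list of rows of pixel values occupying the video region;
    # every pixel outside that region keeps the predetermined value `fill`.
    plane = [[fill] * canvas_w for _ in range(canvas_h)]
    for r, row in enumerate(frame):
        for c, px in enumerate(row):
            plane[top + r][left + c] = px
    return plane
```

For example, a 2x2 video frame placed at offset (1, 1) in a 4x4 canvas leaves all other pixels at 0.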


The GPU 122 may perform a graphic processing function. For example, the GPU 122 may generate a graphic image including various objects such as icons, images, texts, and the like by using a calculator (not illustrated) and a renderer (not illustrated). Here, the calculator (not illustrated) may calculate attribute values, such as coordinate values, shapes, sizes, and colors, according to which each object is to be displayed in a layout of a screen, based on a received control command. In addition, the renderer (not illustrated) may render the graphic image of various layouts including the objects on the basis of the attribute values calculated by the calculator (not illustrated).
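A toy sketch of the calculator step described above (the layout rule and all names here are hypothetical, chosen only for illustration): compute coordinate, size, and color attribute values for each object according to a simple row layout, which a renderer could then draw.

```python
def layout_attributes(labels, item_w=100, item_h=40, margin=10,
                      color=(255, 255, 255)):
    # Place each labeled object left to right in a single row,
    # producing the attribute values a renderer would consume.
    attrs = []
    for i, label in enumerate(labels):
        attrs.append({"label": label,
                      "x": margin + i * (item_w + margin),
                      "y": margin,
                      "w": item_w, "h": item_h,
                      "color": color})
    return attrs
```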


According to an embodiment, the GPU 122 may render a graphic image having a second resolution (e.g., 2K, 4K) less than a critical resolution at an output frequency that is the same as that of the video signal or similar thereto within a critical error range. Specifically, the GPU 122 may render the graphic image having the second resolution less than the critical resolution and write the rendered graphic image to the first memory 131. For example, the GPU 122 may render a 2K graphic image 32 including graphics “News” and “light” provided in the background region, and a graphic “Ch 11” provided in the video region, and write the rendered graphic image to the first memory 131. Here, the graphic image 32 may be an image having a second resolution in which regions other than the graphics (“News”, “light”, and “Ch 11”) have a predetermined pixel value (e.g., 0).


In addition, the GPU 122 may generate an alpha value corresponding to the rendered graphic image. For example, the GPU 122 may generate alpha maps corresponding to each of the graphics “News” and “light” provided in the background region, and the graphic “Ch 11” provided in the video region, and store the generated alpha maps in the first memory 131 or another memory. Here, the alpha map may be a map having a size corresponding to the second resolution, in which pixel regions corresponding to the graphics (“News”, “light”, and “Ch 11”) have a large alpha value and the remaining region has a small alpha value.
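The alpha map described above can be sketched as follows (a minimal illustration under the assumption that graphic pixels are marked fully opaque, 255, and all remaining pixels fully transparent, 0; box coordinates are hypothetical):

```python
def make_alpha_map(h, w, graphic_boxes):
    # graphic_boxes: list of (top, left, height, width) rectangles
    # covering graphic pixels; those pixels get alpha 255, the rest 0.
    alpha = [[0] * w for _ in range(h)]
    for (top, left, gh, gw) in graphic_boxes:
        for r in range(top, top + gh):
            for c in range(left, left + gw):
                alpha[r][c] = 255
    return alpha
```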


In some cases, the GPU 122 may further perform a function of decoding an image. For example, the GPU 122 may decode compressed images of various formats using coding logic. However, the decoding may also be performed by an operation of a CPU (not illustrated) or the decoder 125 other than the GPU 122.


The first scaler 123 may be software or hardware that scales the graphic image rendered by the GPU 122. According to an example, the first scaler 123 may upscale the graphic image having the second resolution (e.g., 2K, 4K) to the first resolution (e.g., 8K). For example, the first scaler 123 may upscale the 2K graphic image 32 including the graphics (“News”, “light”, and “Ch 11”) written in the first memory 131 to an 8K graphic image (not illustrated).


In addition, the first scaler 123 may scale an alpha value corresponding to the graphic image rendered on the GPU 122 to correspond to the up-scaled graphic image. For example, the first scaler 123 may upscale a 2K size alpha map generated by the GPU 122 to an 8K size alpha map. The scaling of the alpha value may also be performed by a scaler separate from the first scaler 123 in some cases.


The first mixer 124 may be software or hardware that mixes the graphic image scaled by the first scaler 123 with the video image processed by the video processor 121. Specifically, the first mixer 124 may mix the graphic image scaled by the first scaler 123 and the video image processed by the video processor 121 based on the alpha value scaled by the first scaler 123. Here, the scaling of the alpha value may be performed according to at least one of polyphase scaling, trilinear scaling, linear scaling, or bilinear scaling, as described above. For example, in the scaled alpha map, in a case in which the alpha value corresponding to the graphics (“News”, “light”, and “Ch 11”) is 255, and the alpha value of the remaining regions is 0, the graphic region has low transparency and the remaining regions have high transparency such that the graphic region may be provided with the graphics (“News”, “light”, and “Ch 11”) and the remaining regions may be provided with other content to be mixed (e.g., a graphic image or a video image having a first resolution described later).
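The alpha-based mixing described above follows the standard per-pixel alpha blending formula, out = (a/255)·graphic + (1 − a/255)·video. A minimal single-channel sketch (an illustration, not the patented mixer logic):

```python
def alpha_blend(graphic, video, alpha):
    # All three inputs are lists of rows of 0..255 values of equal size.
    # Where alpha is 255 the graphic shows through; where it is 0 the
    # video (or other content to be mixed) shows through.
    h, w = len(graphic), len(graphic[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            a = alpha[r][c] / 255.0
            out[r][c] = int(a * graphic[r][c] + (1.0 - a) * video[r][c])
    return out
```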


The decoder 125 may be software or hardware that decodes an image. Here, the image to be decoded may be at least one of a video image and a graphic image.


According to an embodiment, when the graphic image having the first resolution (e.g., 8K) is a compressed image, the decoder 125 may decode the graphic image and write the decoded graphic image to the second memory 132. Here, the compressed image may be various types of compressed images such as JPG, JPEG, . . . , and the like. However, when the graphic image is not the compressed image, it is also possible to bypass the decoder 125. In some cases, the GPU 122 or a CPU (not illustrated) may also be used to decode various formats of compressed images. For example, as illustrated in FIG. 3A, an 8K resolution background image 34 including a shelf image on which a picture frame, a vase, or the like is placed may be decoded by the decoder 125 and written to the second memory 132.


The second scaler 126 may be software or hardware that scales an alpha value corresponding to the graphic image having the second resolution rendered by the GPU 122. For example, alpha blending processing is required to provide the graphic image having the second resolution rendered by the GPU 122 on the background image and/or video image decoded by the decoder 125. However, the alpha value generated by the GPU 122 corresponds to the resolution of the graphic image before the up-scaling, i.e., the second resolution (e.g., 2K or 4K), and thus it is necessary to scale the alpha value in order to obtain an alpha value corresponding to the resolution of the up-scaled graphic image, that is, the first resolution (e.g., 8K).
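The alpha up-scaling step can be sketched with the simplest possible scaler, nearest-neighbor repetition (an assumption for illustration; the document also mentions polyphase, linear, bilinear, or trilinear scaling as alternatives):

```python
def upscale_alpha_nearest(alpha, factor):
    # Each alpha value is repeated `factor` times horizontally and
    # each row `factor` times vertically, so a map generated at the
    # GPU's rendering resolution matches the up-scaled image size.
    out = []
    for row in alpha:
        wide = [v for v in row for _ in range(factor)]
        out.extend([list(wide) for _ in range(factor)])
    return out
```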


The second mixer 127 may be software or hardware that mixes the output of the first mixer 124 and the background image. Specifically, the second mixer 127 may read the graphic image having the first resolution written to the second memory 132 and mix the read graphic image with an output image of the first mixer 124. Here, the output image of the first mixer 124 may be an image in which the graphic image up-scaled by the first scaler 123 and the video image processed by the video processor 121 are mixed with each other. In this case, the second mixer 127 may mix the output image of the first mixer 124 and the background image 34 written to the second memory 132 with each other by using the alpha value up-scaled by the second scaler 126.


For example, the second mixer 127 may transfer sync information to specific logic disposed between the second mixer 127 and the second memory 132 based on an output frequency of a video frame transferred from the video processor 121, and receive the background image 34 read from the second memory 132 through the corresponding logic.


The second mixer 127 may mix the output image 33 of the first mixer 124 and the background image 34 based on an alpha map corresponding to the output image 33 of the first mixer 124. Accordingly, an output of the second mixer 127 may be an image 35 including the graphic information rendered by the GPU 122 and the background image 34 as illustrated in FIG. 3A.


Meanwhile, in a case in which the output of the first mixer 124 is a video image that does not include the graphic image and includes only video frames in some regions, unlike in FIG. 3A, it is also possible to mix the video frames into the corresponding region of the background image based on the position information of the video region. However, even in this case, video content may also be mixed using the alpha value.



FIG. 3B is a diagram illustrating a case in which a high resolution graphic image is rendered by the GPU 122 and used as the background image according to another embodiment of the disclosure.


Referring to FIG. 3B, the processor 120 may include the video processor 121, the GPU 122, the first scaler 123, the first mixer 124, the second scaler 126, and the second mixer 127. Although the decoder 125 is omitted in FIG. 3B for convenience of description, the decoder 125 may be required for other decoding operations such as decoding of video content. In FIG. 3B, only operations different from those illustrated in FIG. 3A will be described.


The GPU 122 may further render a graphic image having a first resolution (e.g., 8K) equal to or greater than a critical resolution. The GPU 122 may render the graphic image having the first resolution at an output frequency lower than an output frequency (or a frame rate) of a video signal, and write the rendered graphic image to the second memory 132. In a case in which the performance of the GPU 122 allows it to render the graphic image having the second resolution (e.g., 2K or 4K) at an output frequency equal to or similar to the output frequency of the video signal, when the GPU 122 renders the graphic image having the first resolution (e.g., 8K) equal to or greater than the critical resolution, the output frequency of the GPU 122 is reduced in inverse proportion to the pixel count of the rendered graphic image. For example, in order for a GPU having the performance capable of outputting a 4K graphic image at 60 Hz to render an 8K graphic image, the output frequency needs to be reduced to 15 Hz to enable the above-mentioned operation.
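The 60 Hz to 15 Hz example above follows from assuming a constant pixel fill rate: 8K (7680×4320) has four times the pixels of 4K (3840×2160), so the achievable frequency drops by a factor of four. A short worked sketch:

```python
def achievable_output_hz(base_hz, base_w, base_h, target_w, target_h):
    # Assuming a constant pixel fill rate, the output frequency falls
    # in inverse proportion to the pixel count of the rendered image.
    return base_hz * (base_w * base_h) / (target_w * target_h)

# A GPU capable of 4K at 60 Hz rendering an 8K image:
hz = achievable_output_hz(60, 3840, 2160, 7680, 4320)  # 15.0
```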


Accordingly, in a case in which the graphic image having the first resolution rendered by the GPU 122 is rendered at a low output frequency and stored in the second memory 132, the output frequency may be matched with or made similar to the output frequency of the video signal by repeatedly reading and outputting the stored graphic image. For example, a background image in which the same image remains for a predetermined time or more may be rendered at an output frequency lower than the output frequency of the video signal and written to the second memory 132, and then repeatedly read at an output frequency equal to or similar to the output frequency of the video signal and output. For example, as illustrated in FIG. 3B, the GPU 122 may render an 8K resolution background image 36 including a shelf image on which a picture frame, a vase, or the like is placed and store the rendered background image 36 in the second memory 132.
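The repeated-read scheme above can be sketched as a read schedule (an illustration under the assumption that the video frequency is an integer multiple of the rendering frequency): each stored background frame is re-read for several consecutive video frames so that the output keeps pace with the video signal.

```python
def read_schedule(video_hz, render_hz, n_output_frames):
    # Returns, for each output video frame, the index of the rendered
    # background frame read from memory. With 60 Hz video and 15 Hz
    # background rendering, each rendered frame is read four times.
    repeat = video_hz // render_hz
    return [i // repeat for i in range(n_output_frames)]
```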


The second scaler 126 may upscale an alpha value corresponding to the graphic image having the second resolution rendered by the GPU 122. For example, a 2K size alpha map may be up-scaled to an 8K size alpha map.


In this case, the second mixer 127 may mix the output image of the first mixer 124 and the background image written to the second memory 132 with each other by using the alpha map up-scaled by the second scaler 126.


Although not illustrated in the drawings, the processor 120 may further include a frame rate converter. The frame rate converter may be software or hardware that changes the output frequency of the video image. For example, the frame rate converter may be disposed at a rear stage of the first mixer 124 to change the output frequency of the video image output from the first mixer 124 from 60 Hz to 120 Hz. In this case, the second mixer 127 may receive the graphic image read from the second memory 132 based on the changed output frequency of the video frame transferred from the frame rate converter.


Although FIG. 3A illustrates the embodiment in which a compressed photo image having a first resolution is decoded and used as the background image and FIG. 3B illustrates the embodiment in which a graphic image having the first resolution rendered by the GPU 122 is used as the background image, the photo image decoded by the decoder 125 and the graphic image rendered by the GPU 122 may be used together as the background image. In this case, the GPU 122 may further generate an alpha value corresponding to the rendered graphic image.


In the above-described embodiment, the first to third memories 131 to 133 may be implemented in a size suitable for performing a corresponding operation. For example, the second memory 132 may be implemented in a size suitable for storing the rendered or decoded graphic image having the first resolution.


In addition, although the first to third memories 131 to 133 have been described as separate hardware components, according to another embodiment, the first to third memories 131 to 133 may be implemented with different address regions in one memory. As such, the processing may be performed using separate memories or different regions in one memory for bandwidth efficiency of the memory. However, according to another embodiment, data may also be stored using a compression method for memory capacity and bandwidth efficiency. For example, when the data is compressed to ½ and stored, both the required memory capacity and the bandwidth for the corresponding operation are halved.


In addition, in the above-described embodiment, the first mixer 124 and the second mixer 127 are described as separate mixers, but may be implemented as one mixer.



FIG. 4 is a diagram illustrating an operation of the processor according to another embodiment of the disclosure.



FIG. 4 illustrates a case in which the output image does not include the graphic image having the second resolution, and includes only the graphic image having the first resolution as the background image, unlike the embodiments illustrated in FIGS. 3A and 3B. That is, the GPU 122 may not render the graphic image having the second resolution and may render only the graphic image having the first resolution in some cases. Alternatively, a similar operation may be performed in a case in which the graphic image having the second resolution is not rendered and only the graphic image having the first resolution decoded by the GPU 122 or the decoder (not illustrated) is included as the background image.


The embodiment illustrated in FIG. 4 is an embodiment in which the components related to the graphic image having the second resolution are omitted from the embodiments illustrated in FIGS. 3A and 3B, and may thus not include those components as illustrated.


Specifically, the GPU 122 may render the graphic image having the first resolution (or decode the graphic image having the first resolution) and write the rendered (or decoded) graphic image to the second memory 132, and may generate an alpha value corresponding to the graphic image having the first resolution and store the generated alpha value in the third memory 133. In this case, the second mixer 127 may mix the video image processed by the video processor 121 and the graphic image having the first resolution read from the second memory 132 with each other based on the alpha value stored in the third memory 133.



FIGS. 5A and 5B are diagrams illustrating a processing sequence according to diverse embodiments of the disclosure.



FIG. 5A illustrates an embodiment in which the low resolution graphic image, for example, the 2K graphic image is first processed, the high resolution background image, for example, the 8K background image is processed, and the two processed images are then mixed, as described in FIG. 3A (or FIG. 3B).



FIG. 5B illustrates an embodiment in which the high resolution background image, for example, the 8K background image is processed, the low resolution graphic image, for example, the 2K graphic image is processed, and the two processed images are then mixed, unlike in FIG. 3A (or FIG. 3B).



FIGS. 6A to 6D are diagrams illustrating a processing sequence according to diverse embodiments of the disclosure.


According to an embodiment illustrated in FIG. 6A, after image quality processing is performed on the video image (611), a low resolution graphic image may be mixed with the image quality processed video image (612). Subsequently, a high resolution background image may be mixed with the mixed image (613). Here, the low resolution and the high resolution may have a relative meaning. For example, the low resolution graphic image may be a 2K or 4K graphic image, and the high resolution background image may be an 8K graphic image.
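The ordering of FIG. 6A can be sketched as a simple function composition (the `enhance` and `mix` operations are placeholders standing in for the image quality processing and mixing stages; this is an illustration of the sequence, not the patented pipeline):

```python
def pipeline_6a(video, graphic_lo, background_hi, enhance, mix):
    v = enhance(video)            # (611) image quality processing on the video
    m = mix(v, graphic_lo)        # (612) mix the low resolution graphic image
    return mix(m, background_hi)  # (613) mix the high resolution background
```

The other orderings of FIGS. 6B to 6D permute the same three stages; for example, FIG. 6C would apply `enhance` after the first `mix` instead of before it.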


According to another embodiment illustrated in FIG. 6B, after image quality processing is performed on the video image (621), a high resolution background image may be mixed with the image quality processed video image (622). Subsequently, a low resolution graphic image may be mixed with the mixed image (623).


According to still another embodiment illustrated in FIG. 6C, after a low resolution graphic image is mixed with the video image (631), image quality processing is performed on the mixed image (632). Subsequently, a high resolution background image may be mixed with the image quality processed image (633).


According to still another embodiment illustrated in FIG. 6D, after a high resolution graphic image is mixed with the video image (641), image quality processing is performed on the mixed image (642). Subsequently, a low resolution graphic image may be mixed with the image quality processed image (643).



FIG. 7 is a diagram illustrating one implementation of a display device according to another embodiment of the disclosure.


Referring to FIG. 7, a display device 100″ includes the display 110, the processor 120, the memory 130, an inputter 140, an outputter 150, and a user interface 160. A detailed description of components overlapping those illustrated in FIG. 2 among the components illustrated in FIG. 7 will be omitted.


According to an embodiment of the disclosure, the memory 130 may be implemented as a single memory that stores data generated in various operations according to the disclosure. However, according to another embodiment of the disclosure, the memory 130 may also be implemented to include a plurality of memories.


The memory 130 may store at least a portion of an image input through the inputter 140, various image information required for image processing, for example, texture information for texture processing and edge information for edge processing, and the like. In addition, the memory 130 may also store a final output image to be output through the display 110.


The inputter 140 receives various types of contents. For example, the inputter 140 may receive an image signal from an external device (e.g., a source device), an external storage medium (e.g., a USB memory), an external server (e.g., a web hard), or the like in a streaming or download manner through a communication manner such as AP-based Wi-Fi (Wireless LAN Network), Bluetooth, Zigbee, Wired/Wireless Local Area Network (LAN), Wide Area Network (WAN), Ethernet, 5th Generation (5G), IEEE 1394, High-Definition Multimedia Interface (HDMI), Universal Serial Bus (USB), Mobile High-Definition Link (MHL), Audio Engineering Society/European Broadcasting Union (AES/EBU), Optical, Coaxial, or the like. Here, the image signal may be a digital image signal of any one of a standard definition (SD) image, a high definition (HD) image, a full HD image, and an ultra HD image, but is not limited thereto.


The outputter 150 outputs a sound signal. For example, the outputter 150 may convert a digital sound signal processed by the processor 120 into an analog sound signal, and amplify and output the analog sound signal. For example, the outputter 150 may include at least one speaker unit, a D/A converter, an audio amplifier, and the like that may output at least one channel. According to an example, the outputter 150 may be implemented to output various multi-channel sound signals. In this case, the processor 120 may control the outputter 150 to perform and output enhancement processing on the input sound signal so as to correspond to the enhancement processing of the input image. For example, the processor 120 may convert an input two-channel sound signal into a virtual multi-channel (e.g., a 5.1 channel) sound signal, recognize a position where the display device 100″ is placed and process the sound signal into a stereo sound signal optimized for the space, or provide an optimized sound signal according to the type (e.g., content genre) of the input image.


The user interface 160 may be implemented as a device such as a button, a touch pad, a mouse, or a keyboard, or may be implemented as a touch screen, a remote controller transceiver, or the like that may also perform the display function described above and a manipulation/input function. The remote controller transceiver may receive a remote controller signal from an external remote controller or transmit the remote controller signal through at least one communication scheme of infrared communication, Bluetooth communication, or Wi-Fi communication.


The display device 100″ may further include a tuner and a demodulator according to implementation. A tuner (not illustrated) may receive a radio frequency (RF) broadcast signal by tuning a channel selected by a user or all previously stored channels among RF broadcast signals received through an antenna. The demodulator (not illustrated) may receive and demodulate a digital IF signal (DIF) converted by the tuner and perform channel decoding. According to an embodiment, the input image received through the tuner may be processed through the demodulator (not illustrated) and then provided to the processor 120 for image processing according to an embodiment of the disclosure.



FIG. 8 is a flowchart illustrating a control method of a display device according to an embodiment of the disclosure.


According to a control method of a display device that outputs contents including a video region and a background region illustrated in FIG. 8, a video image is obtained by processing input video content (S810).


In addition, a plurality of graphic images including each of a plurality of graphic objects are obtained by processing each of the plurality of graphic objects in parallel (S820). According to an example, the plurality of graphic images may be obtained by rendering each of the plurality of graphic objects in parallel or decoding a compressed image including each of the plurality of graphic objects in parallel.


Subsequently, an output image including the video content and the plurality of graphic objects is displayed based on the video image and the plurality of graphic images (S830). Here, the output image may be a 4K or 8K image.


In addition, in S830 in which the output image is displayed, the output image may be obtained by alpha blending the video image and the plurality of graphic images based on an alpha value corresponding to each of the plurality of graphic images.


In this case, the display device may be implemented as a modular display device in which a plurality of display modules are connected, and the resolution of the display device may be changed according to the number of the plurality of display modules.


In addition, in S830 in which the output image is displayed, the output image in which the video content is included in some regions and at least one graphic object is included in the remaining regions may be displayed.


In addition, in S830 in which the output image is displayed, the output image in which a graphic object having a first resolution is included in the remaining regions and a graphic object having a second resolution is included in at least one of some regions or the remaining regions may be displayed. In this case, the second resolution may be less than the first resolution.


In addition, in S830 in which the output image is displayed, the output image may be obtained by up-scaling the graphic image having the second resolution and an alpha value corresponding to the graphic image having the second resolution, and alpha blending the graphic image having the first resolution, the video image, and the up-scaled graphic image based on the up-scaled alpha value.


In addition, the first resolution is the same resolution as that of the display device, and in S820 in which the plurality of graphic images are obtained, the graphic image having the first resolution may be written to the memory at a frequency lower than an output frequency of the display device, the graphic image having the first resolution may be read from the memory at the same frequency as the output frequency of the display device, and the graphic image having the second resolution may be rendered at the output frequency of the display device.


According to the diverse embodiments described above, even though the performance of the GPU is low, the resolution of the graphic provided together with the high resolution video content may be maintained at the high resolution.


In addition, in the case of the high resolution graphic image provided according to the diverse embodiments of the disclosure, motion in units of one pixel is possible, unlike in the up-scaled graphic image. For example, assuming that a motion corresponding to one pixel unit occurs across a plurality of frames, in the case of the up-scaled graphic image, the motion appears in units of a plurality of pixels (the number of up-scaled pixels), but in the case of the high resolution graphic image, the motion appears in units of one pixel.
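This pixel-unit motion difference can be demonstrated with a one-row sketch (an illustration using nearest-neighbor up-scaling as an assumed scaler): a one-pixel move made before 4x up-scaling appears as a four-pixel jump in the output, whereas an image rendered at the output resolution could move by a single output pixel.

```python
def upscale_row_nearest(row, factor):
    # repeat each low-resolution pixel `factor` times
    return [v for v in row for _ in range(factor)]

# A dot at column 1 in the low resolution row, then moved by one pixel.
lo_before = [0, 1, 0, 0]
lo_after = [0, 0, 1, 0]

# After 4x up-scaling, the one-pixel motion becomes a four-pixel jump.
step = (upscale_row_nearest(lo_after, 4).index(1)
        - upscale_row_nearest(lo_before, 4).index(1))
```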


According to diverse embodiments of the disclosure, the resolution of the graphic provided together with the high resolution video content may be maintained at the high resolution.


The diverse embodiments of the disclosure may be applied to all electronic devices capable of processing the image, such as an image receiving device such as a set-top box and an image processing device, as well as the display device.


On the other hand, the methods according to the diverse embodiments of the disclosure described above may be implemented in the form of an application or software installable on an existing display device or image processing device. Alternatively, the methods according to the diverse embodiments of the disclosure described above may be performed using a deep learning based artificial neural network (or deep artificial neural network), that is, a learning network model.


In addition, the methods according to the diverse embodiments of the disclosure described above may be implemented by only upgrading software or hardware of the existing display device or image processing device.


In addition, the diverse embodiments of the disclosure described above may also be performed through an embedded server included in the display device or the image processing device, or an external server of the image processing device.


Meanwhile, according to an embodiment of the disclosure, the diverse embodiments described hereinabove may be implemented by software including instructions that are stored in machine (e.g., a computer)-readable storage media. The machine is a device that invokes the stored instructions from the storage media and is operable according to the invoked instructions, and may include the image processing device (e.g., an image processing device A) according to the disclosed embodiments. When the instructions are executed by the processor, the processor may perform functions corresponding to the instructions, either directly or using other components under the control of the processor. The instructions may include codes generated or executed by a compiler or an interpreter. The machine-readable storage media may be provided in the form of non-transitory storage media. Here, the term ‘non-transitory’ means that the storage media do not include a signal and are tangible, but does not distinguish whether data is stored semi-permanently or temporarily in the storage media.


In addition, according to an embodiment of the disclosure, the method according to the diverse embodiments described above may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a purchaser. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or online through an application store (e.g., PlayStore™). In case of the online distribution, at least a portion of the computer program product may be at least temporarily stored in a storage medium such as a memory of a manufacturer's server, a server of an application store, or a relay server, or be temporarily generated.


Each of the components (e.g., modules or programs) according to the diverse embodiments described above may include a single entity or a plurality of entities, and some of the sub-components described above may be omitted, or other sub-components may be further included in the diverse embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be integrated into one entity to perform the same or similar functions performed by the respective components prior to the integration. The operations performed by the module, the program, or other component, in accordance with the diverse embodiments may be performed in a sequential, parallel, iterative, or heuristic manner, or at least some operations may be executed in a different order or omitted, or other operations may be added.


Although the embodiments of the disclosure have been illustrated and described hereinabove, the disclosure is not limited to the abovementioned specific embodiments, but may be variously modified by those skilled in the art to which the disclosure pertains without departing from the gist of the disclosure as disclosed in the accompanying claims. These modifications should also be understood to fall within the scope and spirit of the disclosure.

Claims
  • 1. A display device comprising: a display; anda processor configured to control the display to display an output image including video content and a plurality of graphic objects,wherein the processor is configured to:obtain a video image by processing input video content,obtain a plurality of graphic images including each of the plurality of graphic objects by processing each of the plurality of graphic objects in parallel, andobtain the output image by mixing the obtained video image and the obtained plurality of graphic images.
  • 2. The display device as claimed in claim 1, wherein the processor is configured to obtain the output image by alpha blending the video image and the plurality of graphic images based on an alpha value corresponding to each of the plurality of graphic images.
  • 3. The display device as claimed in claim 1, wherein the display is implemented as a modular display in which a plurality of display modules are connected, and a resolution of the display is changed according to a number of the plurality of display modules.
  • 4. The display device as claimed in claim 1, wherein the processor is configured to obtain the plurality of graphic images by rendering each of the plurality of graphic objects in parallel or decoding a compressed image including each of the plurality of graphic objects in parallel.
  • 5. The display device as claimed in claim 1, wherein the processor includes: a video processor configured to process the video content; and a graphics processing unit (GPU) configured to process the plurality of graphic objects.
  • 6. The display device as claimed in claim 1, wherein the processor controls the display to display the output image in which the video content is included in first regions and at least one graphic object is included in remaining regions that are not the first regions.
  • 7. The display device as claimed in claim 6, wherein the processor controls the display to display the output image in which a graphic object having a first resolution is included in the remaining regions, and a graphic object having a second resolution is included in at least one of the first regions or the remaining regions, and the second resolution is less than the first resolution.
  • 8. The display device as claimed in claim 7, wherein the processor is configured to obtain the output image by up-scaling the graphic image having the second resolution and an alpha value corresponding to the graphic image having the second resolution, and alpha blending a graphic image including a graphic object having the first resolution, the video image, and the up-scaled graphic image based on the up-scaled alpha value.
  • 9. The display device as claimed in claim 7, further comprising a memory, wherein the first resolution is a same resolution as that of the display, and the processor is configured to: write the graphic image having the first resolution to the memory at a frequency lower than an output frequency of the display, and read the graphic image having the first resolution from the memory at a same frequency as the output frequency of the display.
  • 10. The display device as claimed in claim 7, wherein the processor includes: a GPU configured to process the graphic image having the second resolution; a first mixer configured to mix the processed graphic image having the second resolution with the video image; a decoder configured to decode the graphic image having the first resolution; and a second mixer configured to mix the decoded graphic image having the first resolution with a first mixed image mixed by the first mixer.
  • 11. The display device as claimed in claim 1, wherein the output image is a high resolution image of 4K or 8K or higher.
  • 12. A control method of a display device, the control method comprising: obtaining a video image by processing input video content; obtaining a plurality of graphic images including each of a plurality of graphic objects by processing each of the plurality of graphic objects in parallel; and displaying an output image including the video content and the plurality of graphic objects based on the obtained video image and the obtained plurality of graphic images.
  • 13. The control method as claimed in claim 12, wherein in the displaying of the output image, the output image is obtained by alpha blending the video image and the plurality of graphic images based on an alpha value corresponding to each of the plurality of graphic images.
  • 14. The control method as claimed in claim 12, wherein the display device is implemented as a modular display device in which a plurality of display modules are connected, and a resolution of the display device is changed according to a number of the plurality of display modules.
  • 15. The control method as claimed in claim 12, wherein in the obtaining of the plurality of graphic images, the plurality of graphic images are obtained by rendering each of the plurality of graphic objects in parallel or decoding a compressed image including each of the plurality of graphic objects in parallel.
  • 16. The control method as claimed in claim 12, wherein in the displaying of the output image, the output image in which the video content is included in first regions and at least one graphic object is included in remaining regions that are not the first regions, is displayed.
  • 17. The control method as claimed in claim 16, wherein in the displaying of the output image, the output image in which a graphic object having a first resolution is included in the remaining regions, and a graphic object having a second resolution is included in at least one of first regions or remaining regions that are not the first regions, is displayed, and the second resolution is less than the first resolution.
  • 18. The control method as claimed in claim 17, wherein in the displaying of the output image, the output image is obtained by up-scaling the graphic image having the second resolution and an alpha value corresponding to the graphic image having the second resolution, and alpha blending a graphic image having the first resolution, the video image, and the up-scaled graphic image based on the up-scaled alpha value.
  • 19. The control method as claimed in claim 17, wherein the first resolution is a same resolution as that of the display device, and in the obtaining of the plurality of graphic images, the graphic image having the first resolution is written to a memory at a frequency lower than an output frequency of the display device, and the graphic image having the first resolution is read from the memory at a same frequency as the output frequency of the display device.
  • 20. A non-transitory computer-readable medium for storing computer instructions that, when executed by a processor of a display device, cause the display device to perform operations, wherein the operations include: obtaining a video image by processing input video content; obtaining a plurality of graphic images including each of a plurality of graphic objects by processing each of the plurality of graphic objects in parallel; and displaying an output image including the video content and the plurality of graphic objects based on the obtained video image and the obtained plurality of graphic images.
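As an informal illustration only, and not part of the claims, the pipeline recited in claims 2, 4, and 8 can be sketched in Python. The helper names render_object, upscale_nearest, and alpha_blend are hypothetical, single-channel float pixels stand in for real video frames, and thread-based parallelism stands in for the hardware parallelism of the disclosure:

```python
# Hypothetical sketch of claims 2, 4, and 8: render graphic objects in
# parallel, up-scale a lower-resolution graphic image together with its
# alpha plane, then alpha-blend everything over the video image.
from concurrent.futures import ThreadPoolExecutor

def render_object(obj, w, h):
    """Render one graphic object as rows of (pixel, alpha) pairs."""
    return [[(obj["color"], obj["alpha"]) for _ in range(w)] for _ in range(h)]

def upscale_nearest(img, factor):
    """Nearest-neighbor up-scaling of both pixel values and alpha (claim 8)."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide] * factor)
    return out

def alpha_blend(base, graphic):
    """Per pixel: out = alpha * graphic + (1 - alpha) * base (claim 2)."""
    return [
        [g * a + b * (1 - a) for b, (g, a) in zip(brow, grow)]
        for brow, grow in zip(base, graphic)
    ]

W = H = 4
video = [[100.0] * W for _ in range(H)]       # video image (luma only)

objects = [{"color": 255.0, "alpha": 0.5},    # first-resolution graphic
           {"color": 0.0,   "alpha": 1.0}]    # second-resolution graphic

# Process the graphic objects in parallel (claim 4).
with ThreadPoolExecutor() as pool:
    hi = pool.submit(render_object, objects[0], W, H)
    lo = pool.submit(render_object, objects[1], W // 2, H // 2)
    hi_img, lo_img = hi.result(), lo.result()

lo_img = upscale_nearest(lo_img, 2)           # up-scale image and alpha

# First mix the up-scaled graphic with the video, then the
# full-resolution graphic over that result (as in claim 10).
out = alpha_blend(alpha_blend(video, lo_img), hi_img)
```

The two-stage blend mirrors the first mixer/second mixer split of claim 10: the lower-resolution graphic is composited onto the video first, and the display-resolution graphic is composited last so it is never up-scaled.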
Priority Claims (1)
Number Date Country Kind
10-2019-0082029 Jul 2019 KR national