As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option is an Information Handling System (IHS). An IHS generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes. Because technology and information handling needs and requirements may vary between different applications, IHSs may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in IHSs allow for IHSs to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, global communications, etc. In addition, IHSs may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
IHSs often include (or are coupled to) display devices, such as liquid crystal display (LCD) panels. LCD panels are progressively scanned, meaning that at any given time instant, partial frames of both the previous and current frames are visible on the screen, along with a progressively moving tear boundary. This scan-and-hold characteristic is well suited to the display of static image content, but is undesirable for the display of video that contains motion. In general, this is due to the inadequate pixel response times of LCD panels.
Each pixel in a LCD panel includes a column of liquid crystal molecules suspended between two transparent electrodes that are in turn sandwiched between two polarizing filters whose axes of polarity are perpendicular to each other. By applying voltage to the transparent electrodes over each pixel, the corresponding liquid crystal molecules are “twisted” by electrostatic forces, allowing varying degrees of light to pass through the polarizing filters. Due to their electro-optical nature, the liquid crystal materials used in LCD panels have inertia and cannot be switched instantaneously. This results in transition response times that are generally not fast enough for high quality video applications. This slow response time, or latency, can result in video motion artifacts that cause quickly moving objects to appear visually blurred, an effect known as “ghosting” or “smearing.”
LCD response times continue to improve, but vendor specifications are generally limited to “off-to-on,” “rise and fall,” or “black-to-white” response time, which is the time it takes a pixel to change from black to white (rise) and then back to black (fall). The voltage required to change a LCD pixel from black to white, or from white to black, is often greater than the voltage required to change a pixel from one shade of grey to another. This disparity in voltage differential is the reason “black-to-white” response time is much faster than “grey-to-grey” response time, which is defined as the time it takes a pixel to change from one shade of grey to another. Grey-to-grey response times for LCD panels can be many times longer (e.g., 30 to 50 milliseconds) than corresponding “black-to-white” response times.
Video frame periods are typically on the order of 17 milliseconds at a 60 Hertz frame rate, which can be shorter than the liquid crystal “grey-to-grey” response time. These frame rates, when combined with motion within the video frame, can result in video artifacts that cause smearing and low video quality. This problem extends to all LCD displays, but it is more of an issue for LCD panels used in portable IHSs due to their typically lower power consumption and correspondingly slower response times. In addition, due to limited battery life, power adapter capacity, cooling limitations, fan noise, and other operational and design constraints, portable IHSs are generally designed to use computation cycles efficiently and minimize the overhead associated with displaying an image.
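The frame period arithmetic above can be made concrete. The following Python sketch is purely illustrative (it is not part of any embodiment) and compares frame periods against a representative grey-to-grey response time:

```python
# Frame period (in milliseconds) for a given refresh rate (in Hertz).
def frame_period_ms(rate_hz: float) -> float:
    return 1000.0 / rate_hz

# A 60 Hertz stream refreshes roughly every 16.7 ms; higher rates
# shorten the window available for each pixel transition.
print(round(frame_period_ms(60), 1))   # 16.7
print(round(frame_period_ms(120), 1))  # 8.3

# A grey-to-grey response of 30 to 50 ms spans multiple 60 Hertz
# frame windows, which is why motion can smear across frames.
print(30.0 > frame_period_ms(60))      # True
```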
Current approaches to address slow pixel response times include LCD Response Time Compensation (LRTC), which may also be referred to as overdrive compensation. In general, LRTC comprises a technique for mitigating video artifacts that can contribute to smearing when motion video is displayed on a LCD screen. LRTC addresses slow intrinsic response times by imposing an extrinsic overdrive voltage for each pixel to be written, based on the prior and next pixel values and the predetermined characteristics of the LCD panel.
Extended Display Identification Data (EDID), a standard published by the Video Electronics Standards Association (VESA), is a metadata format developed for display devices to inform an IHS of their capabilities. An EDID-based data structure may include information such as the make and model of the display device, supported frame rates, luminance data, and the like. The Display Data Channel (DDC) standard, a companion to the EDID standard, specifies protocols for communication between an IHS and its display device. For instance, the DDC standard may be used to enable the computer host to adjust monitor parameters, such as frame rate, color balance, screen position, brightness, and contrast, to name a few. The EDID and DDC standards collectively provide a means for optimally reproducing video imagery from video streams generated by the IHS.
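As a rough illustration of how a host might decode the beginning of an EDID base block, the Python sketch below checks the fixed 8-byte EDID header and unpacks the manufacturer ID and product code fields. The offsets follow the VESA EDID base block layout; the function name is ours:

```python
# Minimal sketch of reading basic fields from a 128-byte EDID block.
# Offsets follow the VESA EDID base block layout.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid_basics(edid: bytes) -> dict:
    if edid[:8] != EDID_HEADER:
        raise ValueError("not a valid EDID base block")
    # Manufacturer ID: two bytes at offset 8 packing three 5-bit letters.
    raw = (edid[8] << 8) | edid[9]
    mfg = "".join(chr(((raw >> shift) & 0x1F) + ord("A") - 1)
                  for shift in (10, 5, 0))
    # Product code: little-endian 16-bit value at offsets 10-11.
    return {"manufacturer": mfg,
            "product_code": edid[10] | (edid[11] << 8)}
```

In practice such a block would be retrieved from the display over the DDC channel rather than constructed in software.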
Systems and methods for providing fast response time performance with low frame rate video streams are described. In some embodiments, an Information Handling System (IHS) may include a controller and a memory coupled to the controller, the memory having program instructions stored thereon that, upon execution, cause the controller to receive LCD display capability information from a display device, and determine, from the received capability information, that a video stream sent to the display device has a lower frame rate than the capabilities of the display device. Using this information, the instructions then increase the frame rate of the video stream by repeating each frame during the current time window of the frame.
According to another embodiment, a video frame response time enhancement method includes the steps of receiving LCD display capability information from a display device, and determining, using the received capability information, that a video stream sent to the display device has a lower frame rate than the capabilities of the display device. The method may then increase the frame rate of the video stream by repeating each frame during a current time window of the frame.
According to yet another embodiment, a memory storage device has program instructions stored thereon that, upon execution, cause an Information Handling System (IHS) to receive LCD display capability information from a display device, determine, using the received LCD display capability information, that a video stream sent to the display device has a lower frame rate than the capabilities of the display device, and increase the frame rate of the video stream by repeating each frame during a current time window of the frame.
Having thus described the invention in general terms, reference will now be made to the accompanying drawings. The present invention(s) is/are illustrated by way of example and is/are not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity, and have not necessarily been drawn to scale.
According to various embodiments of the present disclosure, a system and method for providing fast response time performance with low latency in Liquid Crystal Displays (LCDs) are described. Conventional techniques for adapting a display device with high frame rate capabilities to function with an IHS that only generates low frame rate video streams have enjoyed limited success, particularly when used with Liquid Crystal Display (LCD) devices that use Response Time Compensation (RTC) for enhancing the display's performance. Embodiments of the present disclosure provide a solution to these problems, among others, by using a pulldown technique in which low frame rate video streams are repeated in a manner which leverages the high frame rate of the LCD display while maintaining a latency of the LCD display at a relatively low level.
As gaming displays gain popularity in the consumer market, consumers are paying more attention to the response time and motion blur of a display, which can be a pivotal factor in how well the display accurately reproduces imagery generated by games, and in particular action games that can generate fast moving imagery. Liquid Crystal Displays (LCDs) have an inherently poor response time relative to other types (e.g., Organic Light Emitting Diode (OLED) displays, cathode ray tube (CRT) displays, etc.).
Various Response Time Compensation (RTC) or Overdrive Compensation (OD) techniques have been developed. Response times are usually measured from grey-to-grey transitions, called G-to-G response time or Grey Level Response Time (GLRT). Response time compensation (RTC), also known as “overdrive” (OD) technology, may be used to reduce GLRT and reduce motion blur. The OD method can be implemented using a matrix grey level Look-Up Table (LUT). For example, a 17×17 LUT may be suitable for an 8-bit panel. Because an 8-bit panel has 256 grey levels, and each grey-to-grey transition is associated with an OD grey level, there are 289 OD values in the 17×17 LUT, covering transitions between sampled grey levels.
When the OD technique is used, however, an overshoot and/or undershoot effect is inevitably introduced, which can significantly affect the perception of the human visual system (HVS) and lead to motion blur or inverse ghosting. On one hand, weak OD with little or no overshoot and/or undershoot does not eliminate the trailing motion artifact and/or ghosting effect, which leads to motion blur. On the other hand, strong OD can introduce serious overshoot and/or undershoot effects in the pixel images, causing inverse ghosting in which a double image, or even a color mismatch artifact, is observed, leading to a poor user experience. These problems are further exacerbated when a high frame rate LCD display is coupled to an IHS that only generates low frame rate video streams. Because OD techniques are tuned to function at a certain predetermined frame rate, when a low frame rate video stream is encountered, the quality of the video imagery diminishes.
In particular, curve 102 is a first overdriven grey level response curve (GLRC) representing a level of overdrive that is experienced by a high frame rate LCD display operating with a 165 Hertz (6.0 millisecond) frame rate. Curve 104 is a second GLRC representing a level of overdrive that is experienced by a high frame rate LCD display operating with a 60 Hertz (16.7 millisecond) frame rate using the same high frame rate LUT that is used to generate curve 102. Curve 106, on the other hand, is a third GLRC representing a level of overdrive that is experienced by a high frame rate LCD display operating with a 60 Hertz (16.7 millisecond) frame rate using a low frame rate (e.g., 60 Hz) LUT.
Conventional techniques have been implemented to address these issues, but they often engender other problems with reproducing quality imagery.
For purposes of this disclosure, an Information Handling System (IHS) may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an IHS may be a personal computer (e.g., desktop or laptop), tablet computer, mobile device (e.g., Personal Digital Assistant (PDA) or smart phone), server (e.g., blade server or rack server), a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. An IHS may include Random Access Memory (RAM), one or more processing resources such as a Central Processing Unit (CPU) or hardware or software control logic, Read-Only Memory (ROM), and/or other types of nonvolatile memory.
Additional components of an IHS may include one or more disk drives, one or more network ports for communicating with external devices as well as various I/O devices, such as a keyboard, a mouse, touchscreen, and/or a video display. An IHS may also include one or more buses operable to transmit communications between the various hardware components.
Components of IHS 300 may include, but are not limited to, processor 302 (e.g., central processor unit or “CPU”), input/output (I/O) device interface 304, such as a display, a keyboard, a mouse, and associated controllers, hard drive or disk storage 306, various other subsystems 308, network port 310, and system memory 312. Data is transferred between the various system components via various data buses illustrated generally by bus 314. Video optimizer system 318 couples I/O device interface 304 to LCD display 320, as described in more detail below.
In some implementations, IHS 300 may not include each of the components shown in
In this 17×17 LUT, 17 “from” grey levels 0-255 (vertical) are mapped to 17 “to” grey levels 0-255 (horizontal). Each column/row intersection contains, for a particular “from/to” combination, a predetermined OD grey level. For example, if the initial grey level is 176 and the target grey level is 208, the OD grey level value is set to 216 to provide an overshooting boost or compensation. Conversely, if the initial grey level is 96 and the target grey level is 48, the OD grey level value is set to 33 to provide an undershooting boost or compensation.
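The table-driven lookup just described can be sketched in Python as follows. The grid spacing and the fill rule are illustrative stand-ins for panel-measured data (the gain happens to make the (176 → 208) example above yield 216, though not every measured value, such as the 33 in the undershoot example); grey levels that fall between grid points are handled by bilinear interpolation between the four surrounding entries:

```python
# Sketch of building and applying a 17x17 overdrive LUT for an 8-bit
# panel. The fill rule is illustrative; a real LUT holds measured values.
GRID = [min(i * 16, 255) for i in range(17)]  # 0, 16, ..., 240, 255

def make_lut(gain: float = 0.25):
    # Illustrative fill: boost each transition beyond its target level.
    lut = [[0] * 17 for _ in range(17)]
    for i, frm in enumerate(GRID):
        for j, to in enumerate(GRID):
            od = to + gain * (to - frm)
            lut[i][j] = max(0, min(255, int(round(od))))
    return lut

def lookup_od(lut, frm: int, to: int) -> int:
    # Bilinear interpolation between the four surrounding grid cells.
    def locate(g):
        i = min(g // 16, 15)
        lo, hi = GRID[i], GRID[i + 1]
        return i, (g - lo) / (hi - lo)
    i, fx = locate(frm)
    j, fy = locate(to)
    v = (lut[i][j] * (1 - fx) * (1 - fy) + lut[i + 1][j] * fx * (1 - fy)
         + lut[i][j + 1] * (1 - fx) * fy + lut[i + 1][j + 1] * fx * fy)
    return int(round(v))
```

Because both 176 and 208 lie on the sampled grid, the example transition resolves directly to the stored entry, while a stationary pixel (same “from” and “to” level) interpolates back to its own grey level.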
Timing controller 504 is coupled to row drivers 506 and column drivers 508, which map grey level values to voltage nodes on a series resistance string. Column drivers 508 predetermine the voltage needed at each node to achieve the associated brightness level required to produce the intended grey level value. As grey level commands in digital video stream data 502 are received by timing controller 504, RTC logic 512 retrieves the previous grey level for the corresponding element within the video data stream from FIFO frame buffer 514.
Simultaneously, RTC logic 512 stores the current grey level in FIFO frame buffer 514 for use in the next frame. RTC logic 512 then compares the current and previous grey level commands for each separate red, green and blue (RGB) element using separate RGB look-up tables 516. The contents of RGB look-up tables 516 provide a unique grey level surrogate for each pairing of current and previous grey level commands, which is used to calculate the value of grey level substituted boost.
Grey level substituted boost commands are communicated by RTC logic 512 through data link 518 to column drivers 508, which then produce an override, or “over-drive” command to deliver appropriate higher voltage to the voltage node. Delivering the higher voltage results in compensated response, thereby reducing video artifacts that can contribute to smearing of video images containing motion.
In other embodiments, RTC 512, Frame Buffer FIFO 514, and Look-Up Tables 516 may all be implemented within a scalar processor, which may be disposed outside of the LCD panel module. In those cases, the data input into timing controller 504 may be RTC-processed. That is, the scalar processor may complete the RTC and then pass processed data to timing controller 504 for LCD device 520 to render.
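The per-pixel flow of RTC logic 512 — fetch the previous grey level from the frame buffer, store the current level for the next frame, and substitute a boosted level from a per-channel table — can be sketched in Python as follows. The class and parameter names are ours, and the simple boost rule stands in for the measured RGB look-up tables 516:

```python
# Sketch of the per-pixel RTC flow: fetch the previous grey level from
# a frame buffer, store the current level for the next frame, and
# substitute a boosted level from a per-channel look-up.
class RtcLogic:
    def __init__(self, width, height, lut):
        # One previous-frame entry per pixel, initialized to black.
        self.prev = [[(0, 0, 0)] * width for _ in range(height)]
        self.lut = lut  # lut(channel, prev_level, cur_level) -> od level

    def process_frame(self, frame):
        out = []
        for y, row in enumerate(frame):
            out_row = []
            for x, rgb in enumerate(row):
                prev_rgb = self.prev[y][x]
                out_row.append(tuple(
                    self.lut(ch, p, c)
                    for ch, (p, c) in enumerate(zip(prev_rgb, rgb))))
                self.prev[y][x] = rgb  # stored for use in the next frame
            out.append(out_row)
        return out

# Illustrative boost rule standing in for the measured RGB tables.
def simple_boost(ch, prev_level, cur_level):
    return max(0, min(255, cur_level + (cur_level - prev_level) // 4))
```

With this rule, a first-frame transition from black to grey level 128 is boosted to 160, while a repeated frame (no transition) passes through unchanged.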
Each frame 606a-c includes an active portion 608a-c in which pixel luminance information is being transmitted, and a blank portion 610a-c in which no pixel information is being transmitted. In one embodiment, frames 606a-c are customized video frames in which the active portions 608a-c are transmitted in a fast mode (e.g., 120 Hertz), while the overall frame duration is maintained at a low frame rate mode (e.g., 60 Hertz). For example, the timing controller 504 may, using the DDC channel, instruct the IHS to send frames 606a-c using such a timing sequence. Embodiments of the present disclosure may provide certain advantages in that, because the active portions 608a-c are transmitted more quickly, a LUT designed for use with high frame rate video streams may be used without any substantial degradation in the performance of the LCD device's capabilities.
The video frame response time enhancement system generates the high frame rate video stream 604 by simultaneously sending each active portion 608a-c to the LCD device as it is being received, and repeatedly sending that same frame 606a-c to the LCD device within the time window of the current frame. For example, the video frame response time enhancement system generates the high frame rate video stream 604 by sending active portions 608a′ and 608a″ to LCD device during the time window of the frame 606a and so on for all frames 606a-c generated by the IHS.
Initially at step 710, the IHS 702 issues a request message to obtain the capabilities of the LCD display 706 from a scalar 704, such as one configured in a LCD device, and in response, receives a response indicating the capabilities of the LCD device 706 at step 712. In one embodiment, the request message and response message comprise Extended Display Identification Data (EDID) formatted messages, which are published by the Video Electronics Standards Association (VESA). For example, the IHS 702 may issue the request message each time the IHS 702 is rebooted and/or the LCD device 706 is connected to the IHS 702, such as when an HDMI cable is plugged into the IHS 702. In one embodiment, the method 700 may instruct video optimizer system 318 and/or I/O interface 304 to generate customized video frames in which the active portions 608a-c of each frame 606a-c are transmitted in a fast mode (e.g., 120 Hertz), while the overall frame duration is maintained at a low frame rate mode (e.g., 60 Hertz).
Thereafter at step 714, the method 700 transmits the active portion 608a of a first frame 606a to the scalar 704 in which the scalar 704 simultaneously forwards the received active portion 608a to the LCD device 706 at step 716. During this time, the scalar 704 stores the received active portion 608a in a buffer or other suitable storage device. Because the active portion 608a has been transmitted to the scalar 704 at the fast frame rate (e.g., 8.3 milliseconds), only half (e.g., 50 percent) of the current time window has transpired. Thus at step 718, the scalar may transmit the stored active portion 608a to the LCD device 706 during the second half portion of the current time window of the first frame 606a.
To process the second frame 606b, the method 700 transmits the active portion 608b of a second frame 606b to the scalar 704 at step 720 in which the scalar 704 simultaneously forwards the received active portion 608b to the LCD device 706 at step 722. During this time, the scalar 704 stores the received active portion 608b in the buffer. Again, because the active portion 608b has been transmitted to the scalar 704 at the fast frame rate (e.g., 8.3 milliseconds), only half (e.g., 50 percent) of the current time window has transpired. Thus at step 724, the scalar may transmit the stored active portion 608b to the LCD device 706 during the second half portion of the current time window of the second frame 606b.
To process the third frame 606c, the method 700 transmits the active portion 608c of a third frame 606c to the scalar 704 at step 726 in which the scalar 704 simultaneously forwards the received active portion 608c to the LCD device 706 at step 728. During this time, the scalar 704 stores the received active portion 608c in the buffer. Thereafter at step 730, the scalar 704 may transmit the stored active portion 608c to the LCD device 706 during the second half portion of the current time window of the third frame 606c.
Although the method 700 describes only three video frames 606a-c, it should be understood that the method 700 continues to process video frames 606a-c on an ongoing basis to render a high frame rate video stream from a low frame rate video stream by repeating active portions 608a-c during the current time window of the current frame 606a-c. Nevertheless, when use of the method 700 is no longer needed or desired, the method 700 ends.
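The repetition performed by method 700 amounts to a simple schedule, sketched in Python below (the frame labels are placeholders): each active portion is forwarded once as it is received and re-sent once from the scalar's buffer within the same frame window, doubling the effective frame rate.

```python
# Sketch of method 700's repetition schedule: each low frame rate
# frame is emitted twice within its own time window, so a 60 Hertz
# input sequence becomes a 120 Hertz output sequence.
def double_frames(frames):
    out = []
    for frame in frames:
        buffered = frame      # the scalar stores the active portion
        out.append(frame)     # forwarded as it is received
        out.append(buffered)  # re-sent in the second half of the window
    return out

print(double_frames(["606a", "606b", "606c"]))
# ['606a', '606a', '606b', '606b', '606c', '606c']
```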
Although
The low frame rate video stream 802 may be one that is generated by the IHS 300 and transmitted to the scalar device of the LCD device, and the high frame rate video stream 804 may be one that is sent from the scalar device to the LCD device, such as the scalar device and LCD display 520 as described above with respect to
The video frame response time enhancement system generates the high frame rate video stream 804 by simultaneously sending each active portion 808a-d to the LCD display as it is being received, and repeatedly sending that same frame 806a-d to the LCD display within the time window of the current frame. For example, the video frame response time enhancement system generates the high frame rate video stream 804 by sending active portions 808a′ and repeated frames 808a″ to the LCD display during the time window of the frame 806a and so on for all frames 806a-d generated by the IHS 300.
For the process of converting a 60 Hertz frame rate video stream 802 to a 165 Hertz frame rate video stream 804, a series of four 60 Hertz video frames is converted into eleven 165 Hertz video frames 812. For example, the active portion 808a of a first video frame 806a is replicated twice to form three 165 Hertz video frames 812 (e.g., one frame 806a′ and two frames 806a″) that are generated within the time window of the first video frame 806a. Likewise, the active portion 808b of a second video frame 806b is replicated twice to form three 165 Hertz video frames 812 (e.g., one frame 806b′ and two frames 806b″) within the time window of the second video frame 806b, and the active portion 808c of a third video frame 806c is replicated twice to form three 165 Hertz video frames 812 (e.g., one frame 806c′ and two frames 806c″) within the time window of the third video frame 806c. The active portion 808d of a fourth video frame 806d, however, is replicated only once to form two 165 Hertz video frames 812 (e.g., one frame 806d′ and one frame 806d″) within the time window of the fourth video frame 806d. Thus, eleven 165 Hertz video frames 812 may be generated from four 60 Hertz video frames 806a-d to form the high frame rate video stream 804. Moreover, the sequence of four 60 Hertz video frames 806a-d being converted to eleven 165 Hertz video frames 812 may be repeated at ongoing intervals to generate the 165 Hertz frame rate video stream 804.
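The four-to-eleven cadence described above can be derived by counting how many 165 Hertz output slots begin inside each 60 Hertz input window. The sketch below (function names are ours) reproduces the 3-3-3-2 repeat pattern, and also yields the simple doubling of the 60-to-120 Hertz case:

```python
def ceildiv(a: int, b: int) -> int:
    # Integer ceiling division, exact for the rate ratios used here.
    return -(-a // b)

def repeat_counts(in_hz: int, out_hz: int, n_in: int):
    # The k-th input window spans output slots [k*out/in, (k+1)*out/in);
    # the difference of the endpoint ceilings counts the output frames
    # begun in that window, front-loading any extra repeats.
    return [ceildiv((k + 1) * out_hz, in_hz) - ceildiv(k * out_hz, in_hz)
            for k in range(n_in)]

print(repeat_counts(60, 165, 4))  # [3, 3, 3, 2] -> eleven 165 Hz frames
print(repeat_counts(60, 120, 3))  # [2, 2, 2]   -> the frame-doubling case
```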
Initially at step 910, the IHS 902 issues a request message to obtain the capabilities of the LCD display 906 from a scalar 904, such as one configured in the LCD display 906, and in response, receives a response indicating the capabilities of the LCD display 906 at step 912. In one embodiment, the request message and response message comprise Extended Display Identification Data (EDID) formatted messages, such as described above with reference to
At step 914, the method 900 transmits the active portion 808a of a first frame 806a to the scalar 904 in which the scalar 904 simultaneously forwards the received active portion 808a′ to the LCD display 906 at step 916. During this time, the scalar 904 stores the received active portion 808a in a buffer or other suitable storage device. Because the active portion 808a has been transmitted to the scalar 904 at the fast frame rate (e.g., 6.0 milliseconds), only approximately a third of the current time window has transpired. Thus at steps 918 and 920, the scalar 904 may transmit the stored active portion 808a′ to the LCD display 906 twice during the following two-thirds (⅔) of the current time window of the first frame 806a.
To process the second frame 806b, the method 900 transmits the active portion 808b of a second frame 806b to the scalar 904 at step 922, in which the scalar 904 simultaneously forwards the received active portion 808b′ to the LCD display 906 at step 924. During this time, the scalar 904 stores the received active portion 808b in the buffer. Again, because the active portion 808b has been transmitted to the scalar 904 at the fast frame rate (e.g., 6.0 milliseconds), only approximately a third of the current time window has transpired. Thus at steps 926 and 928, the scalar may transmit the stored active portion 808b″ to the LCD display 906 twice during the remaining two-thirds (⅔) of the current time window of the second frame 806b.
To process the third frame 806c, the method 900 transmits the active portion 808c of a third frame 806c to the scalar 904 at step 930, in which the scalar 904 simultaneously forwards the received active portion 808c′ to the LCD display 906 at step 932. During this time, the scalar 904 stores the received active portion 808c in the buffer. Thereafter at steps 934 and 936, the scalar 904 may transmit the stored active portion 808c″ to the LCD display 906 twice during the remaining two-thirds (⅔) of the current time window of the third frame 806c.
To process the fourth frame 806d, the method 900 transmits the active portion 808d of a fourth frame 806d to the scalar 904 at step 938, in which the scalar 904 simultaneously forwards the received active portion 808d′ to the LCD display 906 at step 940. During this time, the scalar 904 stores the received active portion 808d in the buffer. Thereafter at step 942, the scalar 904 may transmit the stored active portion 808d″ to the LCD display 906 once during the remaining portion of the current time window of the fourth frame 806d.
Although the method 900 describes only how four low frame rate video frames 806a-d may be processed to generate eleven high frame rate video frames 812, it should be understood that the method 900 continues to process video frames 806a-d on an ongoing basis to render a high frame rate video stream from a low frame rate video stream by repeating active portions 808a-d one or more times during the current time window of the current frame 806a-d. Nevertheless, when use of the method 900 is no longer needed or desired, the method 900 ends.
Although
It should be understood that various operations described herein may be implemented in software executed by processing circuitry, hardware, or a combination thereof. The order in which each operation of a given method is performed may be changed, and various operations may be added, reordered, combined, omitted, modified, etc. It is intended that the invention(s) described herein embrace all such modifications and changes and, accordingly, the above description should be regarded in an illustrative rather than a restrictive sense.
The terms “tangible” and “non-transitory,” as used herein, are intended to describe a computer-readable storage medium (or “memory”) excluding propagating electromagnetic signals; but are not intended to otherwise limit the type of physical computer-readable storage device that is encompassed by the phrase computer-readable medium or memory. For instance, the terms “non-transitory computer readable medium” or “tangible memory” are intended to encompass types of storage devices that do not necessarily store information permanently, including, for example, RAM. Program instructions and data stored on a tangible computer-accessible storage medium in non-transitory form may afterwards be transmitted by transmission media or signals such as electrical, electromagnetic, or digital signals, which may be conveyed via a communication medium such as a network and/or a wireless link.
Although the invention(s) is/are described herein with reference to specific embodiments, various modifications and changes can be made without departing from the scope of the present invention(s), as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention(s). Any benefits, advantages, or solutions to problems that are described herein with regard to specific embodiments are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The terms “coupled” or “operably coupled” are defined as connected, although not necessarily directly, and not necessarily mechanically. The terms “a” and “an” are defined as one or more unless stated otherwise. The terms “comprise” (and any form of comprise, such as “comprises” and “comprising”), “have” (and any form of have, such as “has” and “having”), “include” (and any form of include, such as “includes” and “including”) and “contain” (and any form of contain, such as “contains” and “containing”) are open-ended linking verbs. As a result, a system, device, or apparatus that “comprises,” “has,” “includes” or “contains” one or more elements possesses those one or more elements but is not limited to possessing only those one or more elements. Similarly, a method or process that “comprises,” “has,” “includes” or “contains” one or more operations possesses those one or more operations but is not limited to possessing only those one or more operations.
Number | Date | Country
---|---|---
20230260477 A1 | Aug 2023 | US