This application relates generally to images; more specifically, this application relates to metadata-based power management in displays.
As used herein, the term “metadata” relates to any auxiliary information that is transmitted as part of a coded bitstream and that assists a decoder to render a decoded image. Such metadata may include, but is not limited to, color space or gamut information, reference display parameters, and auxiliary signal parameters, such as those described herein.
In practice, images comprise one or more color components (e.g., RGB, luma Y and chroma Cb and Cr) where, in a quantized digital system, each color component is represented by a precision of n bits per pixel (e.g., n=8). A bit depth of n≤8 (e.g., color 24-bit JPEG images) may be used with images of standard dynamic range (SDR), while a bit depth of n>8 may be considered for images of enhanced dynamic range (EDR) to avoid contouring and staircase artifacts. In addition to integer datatypes, EDR and high dynamic range (HDR) images may also be stored and distributed using high-precision (e.g., 16-bit) floating-point formats, such as the OpenEXR file format developed by Industrial Light and Magic.
Many consumer desktop displays render non-EDR content at maximum luminance of 200 to 300 cd/m2 (“nits”) and consumer high-definition and ultra-high definition televisions (“HDTV” and “UHD TV”) from 300 to 400 nits. Such display output thus typifies a low dynamic range (LDR), also referred to as SDR, in relation to HDR or EDR. As the availability of EDR content grows due to advances in both capture equipment (e.g., cameras) and EDR displays (e.g., the Sony Trimaster HX 31″ 4K HDR Master Monitor), EDR content may be color graded and displayed on EDR displays that support higher dynamic ranges (e.g., from 700 nits to 5000 nits or more). In general, the systems and methods described herein relate to any dynamic range.
Regardless of dynamic range, video content comprises a series of still images (frames) that may be grouped into sequences, such as shots and scenes. A shot is, for example, a set of temporally-connected frames. Shots may be separated by “shot cuts” (e.g., timepoints at which the whole content of the image changes instead of only a part of it). A scene is, for example, a sequence of shots that describe a storytelling segment of the larger content. In one particular example where the video content is an action movie, the video content may include (among others) a chase scene which in turn includes a series of shots (e.g., a shot of a driver of a pursuing vehicle, a shot of the driver of a pursued vehicle, a shot of a street where the chase takes place, and so on).
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, issues identified with respect to one or more approaches should not be assumed to have been recognized in any prior art on the basis of this section, unless otherwise indicated.
Various aspects of the present disclosure relate to circuits, systems, and methods for image processing, including metadata-based power management in displays.
In one exemplary aspect of the present disclosure, there is provided a method, comprising: receiving an image data and a power metadata, wherein the power metadata includes information relating to a power consumption or an expected power consumption; determining, based on the power metadata, an amount and a duration of a drive modification that may be performed by a target display in response to the power consumption or the expected power consumption; and performing a power management of the target display based on the power metadata to modify a driving of at least one light-emitting element associated with the target display relative to a manufacturer-determined threshold, based on a result of the determining, wherein the power metadata includes at least one of a temporal luminance energy metadata, a spatial luminance energy metadata, a spatial temporal fluctuation metadata, or combinations thereof.
In another exemplary aspect of the present disclosure, there is provided an apparatus, comprising a display including at least one light-emitting element; and display management circuitry configured to: receive a power metadata, wherein the power metadata includes information relating to a power consumption or an expected power consumption, determine, based on the power metadata, an amount and a duration of a drive modification that may be performed by the display in response to the power consumption or the expected power consumption, and perform a power management of the display based on the power metadata to modify a driving of the at least one light-emitting element relative to a manufacturer-determined threshold, based on a result of the determining, wherein the power metadata includes at least one of a temporal luminance energy metadata, a spatial luminance energy metadata, a spatial temporal fluctuation metadata, or combinations thereof.
In this manner, various aspects of the present disclosure provide for improvements in at least the technical fields of image processing and display, as well as the related technical fields of image capture, encoding, and broadcast.
These and other more detailed and specific features of various embodiments are more fully disclosed in the following description, reference being had to the accompanying drawings, in which:
This disclosure and aspects thereof can be embodied in various forms, including hardware or circuits controlled by computer-implemented methods, computer program products, computer systems and networks, user interfaces, and application programming interfaces; as well as hardware-implemented methods, signal processing circuits, memory arrays, application specific integrated circuits, field programmable gate arrays, and the like. The foregoing summary is intended solely to give a general idea of various aspects of the present disclosure, and does not limit the scope of the disclosure in any way.
In the following description, numerous details are set forth, such as spectra, timings, operations, and the like, in order to provide an understanding of one or more aspects of the present disclosure. It will be readily apparent to one skilled in the art that these specific details are merely exemplary and not intended to limit the scope of this application.
Moreover, while the present disclosure focuses mainly on examples in which the various elements are used in consumer display systems, it will be understood that this is merely one example of an implementation. It will further be understood that the disclosed systems and methods can be used in any device in which there is a need to display image data; for example, cinema, consumer and other commercial projection systems, smartphone and other consumer electronic devices, heads-up displays, virtual reality displays, and the like.
Display devices include several components, including light-emitting pixels in self-emissive display technologies such as organic light-emitting diode (OLED) displays or plasma display panels (PDPs), or backlights in other display technologies that use transmissive light modulators, such as liquid crystal displays (LCDs). In such devices, if various components are driven beyond their technical and physical limitations, the expected behavior such as color rendition might suffer and the failure rate of the display system increases. Such driving may result in temporary or permanent component failure. To remedy this, some component manufacturers (often referred to as original equipment manufacturers or OEMs) may limit the technical capabilities by applying operation thresholds. For example, component manufacturers may apply thresholds related to power consumption for components like light-emitting diodes (LEDs), LED driver chips, power supplies, and the like. Additionally or alternatively, component manufacturers may apply thresholds related to thermal properties, such as spatial heat propagation through the display chassis.
These thresholds are typically conservative in order to avoid potential public relations or branding issues, such as if a comparatively rare failure is the subject of unflattering press, and to prevent an increase in service calls to the component manufacturer's support and customer service groups, thus attempting to prevent an increase in cost to the component manufacturer. However, the thresholds may be so conservative that they do not actually approach the technical limits of the display system. Component manufacturers may choose to make the thresholds conservative because, in comparative examples, content properties that relate to energy consumption are not known ahead of playback. Therefore, energy management parameters in display devices are often assessed in real time; for example, the signal input may be analyzed at or immediately before display time.
However, if the power consumption that occurs or is expected to occur during content playback is known ahead of time, the power management system in the display device may be able to modify a driving of the display (e.g., adjust the luminance rendering requirements of the content). Some non-limiting examples of adjustments include limiting luminance to conserve power (e.g., if the device is operating on battery power) and/or exceeding the maximum luminance output as determined by the manufacturer-determined safety thresholds if the duration of any such overdrive is known to cause no long-term harm to the display system or its components. These may be referred to as performing an “underdrive” or an “overdrive.” In some examples, an assessment of the overdrive (or underdrive) level and duration may be performed during a content production or content delivery process, and then a light-emitting element of the display system may be selectively overdriven (or underdriven) as a result of the assessment.
As illustrated in
In the example illustrated in
The video data in the production stream 112 is then provided to a processor or processors at the post-production block 103 for post-production editing. Editing performed at the post-production block 103 may include adjusting or modifying colors or brightness in particular areas of an image to enhance the image quality or achieve a particular appearance for the image in accordance with the video creator's (or editor's) creative intent. This may be referred to as “color timing” or “color grading.” Other editing (e.g., scene selection and sequencing, image cropping, addition of computer-generated visual special effects or overlays, etc.) may be performed at the post-production block 103 to yield a distribution stream 124. In some examples, the post-production block 103 may provide an intermediate stream 125 to the reference display 111 to allow images to be viewed on the screen thereof, for example to assist in the editing process. One, two, or all of the production block 102, the post-production block 103, and the encoding block 104 may further include processing to add metadata to the video data. This further processing may include, but is not limited to, a statistical analysis of content properties. The further processing may be carried out locally or remotely (e.g., cloud-based processing).
Following the post-production operations, the distribution stream 124 may be delivered to the encoding block 104 for downstream delivery to decoding and playback devices such as television sets, set-top boxes, movie theaters, laptop computers, tablet computers, and the like. In some examples, the encoding block 104 may include audio and video encoders, such as those defined by Advanced Television Systems Committee (ATSC), Digital Video Broadcasting (DVB), Digital Versatile Disc (DVD), Blu-Ray, and other delivery formats, thereby to generate a coded bitstream 126. In a receiver, the coded bitstream 126 is decoded by the decoding unit 105 to generate a decoded signal 127 representing an identical or close approximation of the distribution stream 124. The receiver may be attached to the target display 112, which may have characteristics which are different than the reference display 111. Where the reference display 111 and the target display 112 have different characteristics, the display management block 106 may be used to map the dynamic range or other characteristics of the decoded signal 127 to the characteristics of the target display 112 by generating a display-mapped signal 128. The display management block 106 may additionally or alternatively be used to provide power management of the target display 112.
The target display 112 generates an image using an array of pixels. The particular array structure depends on the architecture and resolution of the display. For example, if the target display 112 operates on an LCD architecture, it may include a comparatively-low-resolution backlight array (e.g., an array of LED or other light-emitting elements) and a comparatively-high-resolution liquid crystal array and color filter array to selectively attenuate white light from the backlight array and provide color light (often referred to as dual-modulation display technology). If the target display 112 operates on an OLED architecture, it may include a high-resolution array of self-emissive color pixels.
The link between the upstream blocks and the downstream blocks (i.e., the path over which the coded bitstream 126 is provided) may be embodied by a live or real-time transfer, such as a broadcast over the air using electromagnetic waves or via a content delivery line such as fiber optic, twisted pair (ethernet), and/or coaxial cables. In other examples, the link may be embodied by a time-independent transfer, such as recording the coded bitstream onto a physical medium (e.g., a DVD or hard disk) for physical delivery to an end-user device (e.g., a DVD player). The decoder block 105 and display management block 106 may be incorporated into a device associated with the target display 112; for example, in the form of a Smart TV which includes decoding, display management, power management, and display functions. In some examples, the decoder block 105 and/or display management block 106 may be incorporated into a device separate from the target display 112; for example, in the form of a set-top box or media player.
The decoder block 105 and/or the display management block 106 may be configured to receive, analyze, and operate in response to the metadata included or added at the upstream blocks. Such metadata may thus be used to provide additional control or management of the target display 112. The metadata may include image-forming metadata (e.g., Dolby Vision metadata) and/or non-image-forming metadata (e.g., power metadata).
As noted above, metadata (including power metadata) may be generated in one or more of the upstream blocks illustrated in
Temporal luminance energy metadata, as used herein, may include information related to the temporal luminance energy of a particular frame or frames of the image data. For example, the temporal luminance energy metadata may provide a snapshot of the total luminance budget utilized by each content frame. This may be represented as a summation of the luminance values of all pixels in a given frame. In some examples, the above may also be resampled so as to be independent of the resolution of the target display 112 (i.e., to accommodate 1080p, 2K, 4K, and 8K display resolutions). The temporal luminance energy metadata included within a given frame of the coded bitstream 126 may include information related to future frames. In one example, the temporal luminance energy metadata included within a given frame may include temporal luminance energy information for the following 500 frames. In another example, the temporal luminance energy metadata included within the given frame may include temporal luminance energy information for a larger or smaller number of subsequent frames. Transmission of the temporal luminance energy metadata thus may not be performed for each frame in the coded bitstream 126, but instead may be intermittently transmitted. In some examples, where the temporal luminance energy metadata included within a given frame includes temporal luminance energy for the following N frames, it may be transmitted with the coded bitstream 126 at a period shorter than N (e.g., N/2, N/3, N/4, and so on). The more frequently the temporal luminance energy metadata is transmitted, the more robust the metadata scheme is to latency or other data transmission errors. However, the less frequently the temporal luminance energy metadata is transmitted, the less data bandwidth is used to transmit the metadata. One exemplary relationship between the frequency of metadata transmission and data bandwidth used will be described in more detail below with regard to
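By way of illustration only, the lookahead-and-repeat scheme described above can be sketched as follows. This sketch is not part of the specification; the function name, the 500-frame lookahead, and the N/2 repeat period are illustrative assumptions.

```python
def transmission_frames(total_frames, lookahead_n, period_divisor=2):
    """Return the frame indices at which temporal luminance energy
    metadata covering the next `lookahead_n` frames would be sent.

    Sending with a period shorter than the lookahead window (here
    lookahead_n // period_divisor) means each frame's energy value is
    carried by more than one transmission, adding robustness against
    latency or lost data at the cost of extra bandwidth.
    """
    period = max(1, lookahead_n // period_divisor)
    return list(range(0, total_frames, period))

# A 500-frame lookahead repeated every 250 frames (N/2): each frame's
# energy value is delivered by roughly two metadata transmissions.
sched = transmission_frames(total_frames=1000, lookahead_n=500)
```

Raising `period_divisor` (N/3, N/4, and so on) trades bandwidth for additional redundancy, mirroring the robustness/bandwidth trade-off described above.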
By transmitting the frame-based luminance energy for future frames ahead of time, the display power manager (e.g., the display management block 106) can decide based on the temporal progression of luminance energy how to map the content most effectively to maintain the director's intent while utilizing the hardware capabilities to the fullest. This may include deciding to overdrive (or underdrive) some or all of the light-emitting elements in the end-user display (e.g., the target display 112) for particular scenes or shots, deciding to reduce the luminance of select or all pixels to preserve electrical energy (e.g., from a battery), determining a time period for panel cooldown after a time of intense use or between periods of overdriving, and so on.
Lsum,i = Σx=1..n Σy=1..m Lxyi (1)

In expression (1) above, x corresponds to the x-coordinate of a pixel in the array, y corresponds to the y-coordinate of a pixel in the array, and Lxyi represents the luminance of pixel (x,y) for frame i. In expression (1), each frame includes n×m pixels.
At operation 203, it is determined whether the shot is complete. This may be accomplished by comparing the value i of the current frame to a maximum value P representing the total number of frames in the shot. If it is determined that the shot is not complete, the frame i is incremented by 1 at operation 204 and the process flow returns to operation 202 to calculate the quantity Lsum,i for the new frame. If it is determined that the shot is complete, then the quantity Lsum,temporal is generated. The quantity Lsum,temporal corresponds to the frame-by-frame luminance sum for the entire shot, and may be represented as a one-dimensional data array indicating the quantity Lsum,i for each frame i from i=1 to i=P.
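By way of illustration only, expression (1) and the flow of operations 202 through 204 amount to summing the pixel luminances of each frame and collecting the per-frame sums into a one-dimensional array. The sketch below is not part of the specification; representing a frame as a nested list of luminance values is an assumption made for illustration.

```python
def luminance_sum_per_frame(frame):
    """Expression (1): sum the luminance Lxyi of all n x m pixels in
    one frame, yielding Lsum,i for that frame."""
    return sum(sum(row) for row in frame)

def temporal_luminance_energy(shot):
    """Operations 202-204: iterate over the P frames of a shot and
    return Lsum,temporal, a one-dimensional array holding Lsum,i for
    each frame i of the shot."""
    return [luminance_sum_per_frame(frame) for frame in shot]

# A two-frame shot of 2x2 frames: a dim frame followed by a brighter one.
shot = [
    [[1, 2], [3, 4]],
    [[10, 10], [10, 10]],
]
l_sum_temporal = temporal_luminance_energy(shot)  # [10, 40]
```

In practice the per-frame sums could also be resampled to a fixed resolution, as noted above, so the metadata is independent of the target display's pixel count.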
Spatial luminance energy metadata may include information relating to the total luminance energy of a particular pixel (with a particular coordinate xy) or pixels of the image data across an entire scene or shot. In some display technologies, excess heat must be transported out of the display housing in order to prevent damage to display device components. For example, in many physical displays the lower center portion of the display exhibits the greatest sensitivity to excessive heat or heat buildup, because the latent energy must travel past a large part of the remaining display panel before it can exit the housing on the top or sides. To avoid problems, many component manufacturers limit the heat buildup by globally (temporally and/or spatially) limiting the luminance output for comparative display systems in which the comparative system's power manager does not have information regarding the luminance requirements at future frames. By providing an end-user display with spatial luminance energy metadata, the display power manager (e.g., the display management block 106) can decide, based on the position and intensity or duration of the pixels, how much to drive (or even overdrive or underdrive) the light-emitting elements in the end-user display (e.g., the target display 112).
Lsum,xy = Σi=1..P Lxyi (2)

In expression (2) above, x, y, and Lxyi represent the same quantities as described above with reference to expression (1), and P represents the total number of frames in the shot. Operation 302 may be performed repeatedly, incrementing the x coordinate by 1 each iteration until all pixels of the row have been analyzed.

At operation 303, it is determined whether the row of pixels is complete. This may be accomplished by comparing the value x of the current pixel to a maximum value n representing the number of pixels in each row of the array. If it is determined that the row is not complete, the x coordinate of the pixel is incremented by 1 at operation 304, and the process flow returns to operation 302 to calculate the quantity Lsum,xy for the new pixel. If it is determined that the row is complete, then at operation 305 it is determined whether all rows have been analyzed. This may be accomplished by comparing the value y of the current pixel to a maximum value m representing the total number of rows in the array. If it is determined that the row is not the final row, then the x coordinate of the pixel is reinitialized to 1 and the y coordinate of the pixel is incremented by 1 at operation 306, and the process flow returns to operation 302 to calculate the quantity Lsum,xy for the new pixel. If it is determined that the row is the final row, then at operation 307 the quantity Lsum,spatial is generated. The quantity Lsum,spatial corresponds to the per-pixel luminance sum across all frames of the shot, and may be represented as a two-dimensional data array indicating the quantity Lsum,xy for each pixel.
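By way of illustration only, expression (2) and operations 302 through 307 can be sketched as a per-pixel sum across the frames of a shot. This sketch is not part of the specification; the mapping of x and y onto nested-list axes (frames stored as m rows of n pixels each) is an illustrative assumption.

```python
def spatial_luminance_energy(shot):
    """Expression (2) / operations 302-307: for each pixel coordinate,
    sum its luminance Lxyi over all P frames of the shot, producing the
    two-dimensional array Lsum,spatial with the same shape as a frame."""
    m = len(shot[0])      # number of rows in a frame
    n = len(shot[0][0])   # pixels per row
    return [
        [sum(frame[y][x] for frame in shot) for x in range(n)]
        for y in range(m)
    ]

# The same two-frame 2x2 shot as before: each output entry is that
# pixel's luminance summed across both frames.
shot = [
    [[1, 2], [3, 4]],
    [[10, 10], [10, 10]],
]
l_sum_spatial = spatial_luminance_energy(shot)  # [[11, 12], [13, 14]]
```

A display power manager receiving `l_sum_spatial` ahead of playback could, for example, see that the lower portion of the panel will carry the most energy over the shot and plan its driving accordingly.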
While
Light-emitting elements which provide illumination for the bright regions (e.g., a backlight LED in an LCD architecture or a group of OLED pixels in an OLED architecture) tend to consume more power, and/or to consume power over a longer time, when high-luminance image content is presented at the same part of the display over a prolonged time. In the absence of spatial luminance energy metadata and appropriate management, this may cause stress to components (e.g., the light-emitting elements themselves, drivers, circuit board traces, and the like), latent heat generation that flows upwards and must be removed from the housing, active dimming of pixels or the entire screen, and so on. By providing the target display 112 with spatial luminance energy metadata of a shot prior to the rendering and display of the shot, these problems and/or any component damage may be prevented.
In addition to or as an alternative to calculating the spatial luminance energy metadata, spatial temporal fluctuation metadata may be calculated. The spatial temporal fluctuation metadata may include information relating to the energy fluctuation of a particular pixel or pixels of the image data across an entire scene or shot. For example, a pixel that remains at nearly the same luminance level throughout the scene or shot would have a low degree of energy fluctuation whereas a pixel that varies its luminance level (e.g., to display a bright high-frequency strobe light) would have a high degree of energy fluctuation.
The spatial temporal fluctuation metadata may be calculated by a similar method as illustrated in
In expression (3), σ represents the standard deviation function. In some examples, the spatial luminance energy metadata and the spatial temporal fluctuation metadata may both be calculated at operation 302. In other examples, the process flow of
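By way of illustration only, the spatial temporal fluctuation metadata can be sketched as the per-pixel standard deviation of luminance across the frames of a shot. This sketch is not part of the specification; the choice of the population standard deviation (`pstdev`) as the estimator, and the nested-list frame layout, are illustrative assumptions.

```python
from statistics import pstdev

def spatial_temporal_fluctuation(shot):
    """Sketch of expression (3): the standard deviation of each pixel's
    luminance across the P frames of a shot. A pixel that stays at nearly
    the same level yields a value near 0, while a flashing pixel (e.g.,
    one rendering a bright strobe light) yields a large value."""
    m = len(shot[0])      # number of rows in a frame
    n = len(shot[0][0])   # pixels per row
    return [
        [pstdev([frame[y][x] for frame in shot]) for x in range(n)]
        for y in range(m)
    ]

# Three pixels hold steady; the lower-right pixel strobes between 0 and 100.
shot = [
    [[5, 5], [5, 0]],
    [[5, 5], [5, 100]],
]
fluct = spatial_temporal_fluctuation(shot)  # [[0.0, 0.0], [0.0, 50.0]]
```

Because the iteration structure matches the spatial sum, both quantities could indeed be accumulated in the same pass, as the text notes for operation 302.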
In some implementations, the power metadata described above may be transported as part of the coded bitstream 126, along with actual image data and any additional metadata that may be present. In other implementations, the power metadata may be transported by a different transmission path (“side-loaded”) than the actual image data; for example, the power metadata may be transported via TCP/IP, Bluetooth, or another communication standard from the internet or another distribution device.
As noted above, the power metadata (including temporal luminance energy metadata, spatial luminance energy metadata, spatial temporal fluctuation metadata, and combinations thereof) are types of non-image-forming metadata. In other words, it is possible to render images without the power metadata or with only a partial set of power metadata. Because of this, it is possible to encode less than the full set of power metadata into each and every content frame, in contrast to the case with image-forming metadata, which is used to render the image accurately. The power metadata may be embedded out of order or in pieces. Moreover, missing portions of the power metadata may be interpolated from present portions of the power metadata or simply ignored without negatively impacting fundamental image fidelity.
In one example of the present disclosure, the power metadata is segmented and transported (e.g., as part of the coded bitstream 126) in pieces or pages per content frame.
The number of frames (i.e., N) budgeted to transport the power metadata 402 is based on the size of its payload and the bandwidth allocation for this particular metadata type. Each piece of the power metadata 402 may not have the same length (i.e., total number of bytes) as the content's frame interval, and thus the rate (bytes/frame) for the power metadata 402 might not be the same as the rate for the image-forming metadata 401. Moreover, in examples where temporal luminance energy metadata, spatial luminance energy metadata, and spatial temporal fluctuation metadata are all implemented, some types of the power metadata may be calculated or derived from other types of the power metadata.
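By way of illustration only, the segmentation of a power metadata payload into per-frame pages can be sketched as follows. This sketch is not part of the specification; treating the payload as a byte string and the even page split are illustrative assumptions.

```python
def paginate_power_metadata(payload, budget_frames):
    """Split a power metadata payload (a byte string) into at most
    `budget_frames` pages, one page carried per content frame. The
    resulting rate (bytes/frame) therefore depends on both the payload
    size and the frame budget N allotted to this metadata type."""
    page_size = max(1, -(-len(payload) // budget_frames))  # ceiling division
    return [payload[i:i + page_size]
            for i in range(0, len(payload), page_size)]

# A 10-byte payload budgeted across 4 frames travels as pages of
# 3, 3, 3, and 1 bytes.
pages = paginate_power_metadata(b"0123456789", budget_frames=4)
```

A receiver would reassemble the pages before use; since power metadata is non-image-forming, a lost page can be interpolated or ignored, as described above.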
The next tier is temporal luminance energy metadata 503. The temporal luminance energy metadata 503 includes information relating to a luminance energy for each frame in a shot. Thus, each block of the temporal luminance energy metadata 503 may correspond to the temporal luminance energy metadata 212 described above with regard to
The bottom tier is spatial luminance energy metadata 504. The spatial luminance energy metadata 504 includes information relating to a luminance energy for each pixel over the duration of an individual shot. Thus, each block of the spatial luminance energy metadata may correspond to the spatial luminance energy metadata 312 described above with regard to
There may be an inverse relationship between the data payload and the transmission frequency for a given type of metadata. Moreover, there may be an inverse relationship between the data payload and the proximity to the actual image data described by a given type of metadata. For example, because the total luminance metadata 501 has a very small data payload (e.g., a single number), it may be repeated in the coded bitstream 126 very often and need not be transmitted very near the image frames described thereby. Because the shot luminance metadata 502 has a small data payload, it may be repeated in the coded bitstream 126 often, but less often than the total luminance metadata 501, and similarly need not be transmitted very near the image frames described thereby. Moreover, in some examples, the shot luminance metadata 502 may only describe a subset of the total number of shots, with shot luminance metadata 502 corresponding to earlier shots being transmitted prior to shot luminance metadata 502 corresponding to later shots.
In some examples, only some types of metadata are directly calculated and other types of metadata are derived therefrom. For example, the temporal luminance energy metadata 503 may be calculated (e.g., in a manner as described above with regard to
As an alternative to or in addition to repeating significant power metadata in a predetermined order and/or at predetermined intervals, other transmission ordering may be implemented. For example, if the content is submitted as a 1:1 stream, the power metadata may be dynamically added to the content stream and may be dynamically adjusted by the playout server (e.g., one or more of the upstream blocks illustrated in
Upon receipt of the coded bitstream 126, the downstream blocks illustrated in
In the example illustrated in
In some examples, the power receiver continually outputs target metadata (e.g., the power metadata that will be received and used by the target display 112). The target metadata may include a first target flag data indicating the maximum scaled luminance for a given frame, where 1 indicates no overdriving, and a second target flag data indicating the absolute maximum luminance at the shot's average picture level (APL). While the maximum scaled luminance and the absolute maximum luminance are the same in the particular example illustrated in
The power receiver may further output data regarding a charge status of supercapacitors or other fast-discharging energy storage device, in the event that the target display 112 implements supercapacitors or other such devices to overdrive (or underdrive) one or more light-emitting elements. Where the energy storage devices are supercapacitors, this data instructs the target display 112 to begin charging the supercapacitors at a particular time such that the supercapacitors will be sufficiently charged when overdriving is scheduled to begin. In some examples, the data may instead instruct the target display 112 to charge the supercapacitors well in advance of the overdrive request and maintain the charge state until a discharge request is received, indicating that the light-emitting elements are to be overdriven. In some examples, the target display 112 itself may determine how far in advance to begin charging the supercapacitors. As will be understood and appreciated by the skilled person, the above examples of overdriving one or more light-emitting elements (e.g., by charging the supercapacitors well in advance) may analogously or similarly be applied to underdriving the one or more light-emitting elements, e.g., by discharging the supercapacitors, or the like.
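By way of illustration only, scheduling the start of supercapacitor charging ahead of an overdrive can be sketched as a simple lead-time calculation. This sketch is not part of the specification; the function name, the constant charge rate, and the 60 Hz frame rate are illustrative assumptions, and a real display would take such parameters from its configuration.

```python
def charge_start_frame(overdrive_start_frame, required_joules,
                       charge_rate_watts, frame_rate_hz=60.0):
    """Return the frame at which supercapacitor charging should begin so
    that `required_joules` of boost energy is banked by the time the
    scheduled overdrive starts. Assumes a constant charge rate."""
    charge_seconds = required_joules / charge_rate_watts
    lead_frames = int(charge_seconds * frame_rate_hz + 0.5)  # nearest frame
    return max(0, overdrive_start_frame - lead_frames)

# Banking 30 J of boost energy at a 10 W charge rate takes 3 s, i.e.,
# 180 frames of lead time before an overdrive scheduled at frame 600.
start = charge_start_frame(overdrive_start_frame=600,
                           required_joules=30.0, charge_rate_watts=10.0)
# start == 420
```

The same arithmetic applies to the alternative policy described above (charging well in advance and holding the charge until a discharge request arrives), where only the hold duration differs.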
Power metadata (e.g., the source metadata and/or the target metadata described above) may be stored in a buffer or other memory associated with one or more of the downstream blocks illustrated in
The buffer may also store a configuration file which describes various setting parameters unique to the target display 112 and its hardware properties. For example, the configuration file may include information about one or more of the following: power consumption specifications, including a maximum load of the power supply unit, driver chips, light-emitting elements, and so on; cool-down time of the light-emitting elements or power electronics (LED drivers, etc.); spatial heat transfer as a function of localized heat generation inside the display housing; a maximum overdrive duration of the display, which may be a function of the overdrive level; the presence of supercapacitors and, if present, their capacity, depletion rate, and charge rate; and the like. The configuration file may also be wholly or partly updateable, for example to implement a usage counter and thereby provide information regarding the age or level of wear of the display. In some examples, one or more ambient condition sensors (e.g., temperature sensors, humidity sensors, ambient light sensors, and the like) may be provided to detect corresponding ambient conditions, and information detected by the one or more ambient condition sensors may be stored in or alongside the configuration file to facilitate a determination of the level of wear of the display. This real-time sensor information may also be used to influence the display power management system (e.g., to influence the overdriving or underdriving) to avoid image fidelity artifacts. One example is to avoid underdriving the pixels while the ambient light level is high.
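By way of illustration only, such a configuration file might be represented as follows. Every field name and value in this sketch is a hypothetical illustration, not part of the specification; an actual OEM configuration would define its own schema.

```python
# Hypothetical display configuration mirroring the parameters listed
# above; all keys and values are illustrative assumptions.
display_config = {
    "power": {
        "psu_max_load_watts": 250.0,       # maximum power supply load
        "led_driver_max_amps": 1.2,        # driver chip limit
    },
    "thermal": {
        "cooldown_seconds": 30.0,          # cool-down time after heavy use
        "spatial_heat_transfer_w_per_k": 0.8,
    },
    "overdrive": {
        # Maximum overdrive duration as a function of overdrive level.
        "max_duration_seconds": {"1.1x": 10.0, "1.25x": 4.0},
    },
    "supercapacitors": {
        "present": True,
        "capacity_joules": 50.0,
        "charge_rate_watts": 10.0,
        "depletion_rate_watts": 40.0,
    },
    "wear": {"usage_hours": 0},            # updateable usage counter
}

def allowed_overdrive_seconds(config, level):
    """Look up the permitted overdrive duration for a given overdrive
    level; unknown levels are treated as not permitted."""
    return config["overdrive"]["max_duration_seconds"].get(level, 0.0)
```

A display power management system could consult such a structure, together with real-time ambient sensor data, before honoring an overdrive request carried in the power metadata.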
The various approaches, systems, methods, and devices described herein may implement power metadata to influence target display behavior in the above-described ways without limitation. That is, various aspects of the present disclosure may be used to influence display management mapping behavior (e.g., limiting the luminance output, deviating from the baseline mapping, and the like); to overdrive a backlight unit or (in self-emissive display technologies) the pixels themselves and thereby increase the maximum luminance of individual pixels, pixel groups, or the entire panel beyond overly-conservative manufacturer-set limits, while avoiding excessive taxation on the power supply unit; to increase granularity for display power management systems, for example to manage thermal panel or backlight properties based on spatial and/or temporal power and energy expectations; to provide trim-pass-like behavior and represent luminance levels after the signal has been tone-mapped by the target device; to manage power in multi-display systems; to intelligently limit display power usage for regulatory (e.g., Energy Star compliance) purposes or power saving (e.g., on battery-operated devices); and so on.
A trim pass is a feature which facilitates human override of the mapping parameters that would otherwise be determined by a computer algorithm (e.g., an algorithm which generates one or more portions of the power metadata). In some examples, the override may be carried out during the color grading process to ensure that a certain look is provided or preserved, for example after determining whether the result of the computer algorithm reflects the video or content creator's intent for a particular target display dynamic range bracket (e.g., at a display maximum of 400 nits). Thus, the power metadata may be updated to include information that would cause the target display to alter or disable the algorithmic recommendation for one or more shots or scenes.
The trim-pass-like behavior may be realized by a configuration in which the target display system utilizes the power metadata according to its current playout luminance bracket. If the display maps to a non-default target luminance bracket, the display power management system may be configured to select the trim pass accordingly. For example, if the display transitions from a default mapping to a boost-mode mapping (e.g., an overdrive), the display power management system may switch from a lower-luminance-energy trim pass to a higher one.
In one particular example, during the generation of power metadata the algorithm may indicate that underdriving should be performed for a particular shot. However, underdriving for the particular shot in question may be inadvisable for narrative or other reasons. Therefore, a color grader (human or otherwise) may modify or supplement the power metadata to thereby cause the display power management system to drive (rather than underdrive) the target display, despite the initial output of the algorithm.
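By way of example and not limitation, the bracket-based trim-pass selection and the grader override described above may be sketched as follows. The metadata key `"override_drive"`, the bracket values, and the function names are illustrative assumptions.

```python
def select_trim_pass(trim_passes: dict, playout_nits: int):
    """Pick the trim pass for the highest luminance bracket that does not
    exceed the display's current playout luminance; fall back to the
    lowest bracket if the playout luminance is below all brackets."""
    eligible = [bracket for bracket in trim_passes if bracket <= playout_nits]
    bracket = max(eligible) if eligible else min(trim_passes)
    return trim_passes[bracket]


def effective_drive(power_metadata: dict, algorithmic_drive: str) -> str:
    """A grader override carried in the power metadata replaces the
    algorithm's underdrive/overdrive recommendation for the shot."""
    return power_metadata.get("override_drive", algorithmic_drive)
```

Under this sketch, a transition to a boost-mode mapping raises `playout_nits` and thereby selects a higher-bracket trim pass, while a color grader's override in the metadata causes the display to drive rather than underdrive, as in the example above.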
Systems and devices in accordance with the present disclosure may take any one or more of the following configurations.
(1) A method, comprising: receiving an image data and a power metadata, wherein the power metadata includes information relating to a power consumption or an expected power consumption; determining, based on the power metadata, an amount and a duration of a drive modification that may be performed by a target display in response to the power consumption or the expected power consumption; and performing a power management of the target display based on the power metadata to modify a driving of at least one light-emitting element associated with the target display relative to a manufacturer-determined threshold, based on a result of the determining, wherein the power metadata includes at least one of a temporal luminance energy metadata, a spatial luminance energy metadata, a spatial temporal fluctuation metadata, or combinations thereof.
(2) The method according to (1), wherein the determining the amount and the duration of the drive modification that may be performed by the target display includes determining an amount and a duration of an overdrive that may be performed by the target display without damaging the at least one light-emitting element, and the performing the power management of the target display includes selectively overdriving the at least one light-emitting element to exceed the manufacturer-determined threshold.
(3) The method according to (1) or (2), wherein the determining the amount and duration of the drive modification that may be performed by the target display includes determining an amount and a duration of an underdrive that may be performed by the target display, in response to the power consumption or the expected power consumption, and the performing the power management of the target display includes reducing a luminance of the at least one light-emitting element.
(4) The method according to any one of (1) to (3), wherein the image data and the power metadata are received together as a coded bitstream.
(5) The method according to (4), further comprising: receiving a first portion of the power metadata in a first frame of the coded bitstream; and storing the first portion of the power metadata in a buffer.
(6) The method according to (5), further comprising: retrieving the first portion of the power metadata from the buffer; and performing the power management of the target display for the image data corresponding to a second frame of the coded bitstream based on the first portion of the power metadata, wherein the second frame is a later image frame compared to the first frame.
(7) The method according to any one of (1) to (6), wherein the image data and the power metadata are received via different transmission paths.
(8) The method according to any one of (1) to (7), wherein the power metadata includes the temporal luminance energy metadata, the method further comprising: deriving a shot luminance metadata from the temporal luminance energy metadata, the shot luminance metadata including information relating to a luminance energy for a shot of the coded bitstream.
(9) The method according to any one of (1) to (8), further comprising: generating a target metadata based on the power metadata, the target metadata including at least one of a first flag data indicating a frame countdown to an overdrive request or a second flag data indicating a frame duration of the overdrive request.
(10) The method according to any one of (1) to (9), wherein performing the power management of the target display includes causing the target display to charge at least one energy storage device associated with the target display.
(11) The method according to any one of (1) to (10), wherein performing the power management of the target display includes causing the target display to discharge at least one energy storage device associated with the target display.
(12) The method according to any one of (1) to (11), further comprising: receiving an image-forming metadata; and controlling the target display to display the image data based on the image-forming metadata.
(13) A non-transitory computer-readable medium storing instructions that, when executed by a processor of a computer, cause the computer to perform operations comprising the method according to any one of (1) to (12).
(14) An apparatus, comprising: a display including at least one light-emitting element; and display management circuitry configured to: receive a power metadata, wherein the power metadata includes information relating to a power consumption or an expected power consumption, determine, based on the power metadata, an amount and a duration of a drive modification that may be performed by the display in response to the power consumption or the expected power consumption, and perform a power management of the display based on the power metadata to modify a driving of the at least one light-emitting element relative to a manufacturer-determined threshold, based on a result of the determining, wherein the power metadata includes at least one of a temporal luminance energy metadata, a spatial luminance energy metadata, a spatial temporal fluctuation metadata, or combinations thereof.
(15) The apparatus according to (14), further comprising a memory configured to store a predetermined configuration file, the predetermined configuration file including information relating to at least one setting parameter of the display.
(16) The apparatus according to (15), wherein the configuration file includes information about at least one of a power consumption specification of the display, a cool-down time of the at least one light-emitting element, a spatial heat transfer of the display, a maximum overdrive duration of the display, or a presence of supercapacitors in the display.
(17) The apparatus according to (15) or (16), wherein the configuration file includes a usage counter indicating information about at least one of an age of the display or a level of wear of the display.
(18) The apparatus according to any one of (15) to (17), further comprising an ambient condition sensor configured to detect an ambient condition, wherein the memory is configured to store information relating to the ambient condition.
(19) The apparatus according to any one of (14) to (18), further comprising: a decoder configured to receive a coded bitstream including an image data and the power metadata, and to provide the power metadata to the display management circuitry.
(20) The apparatus according to (19), wherein: the coded bitstream further includes an image-forming metadata, and the display management circuitry is configured to control the display to modify a display of the image data based on the image-forming metadata.
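By way of example and not limitation, the method of items (1) to (3) may be sketched as follows. The metadata keys, the percentage-based drive representation, and the damage-avoidance cap are illustrative assumptions, not limitations of the claimed method.

```python
def determine_drive_modification(power_metadata: dict,
                                 max_overdrive_pct: float = 20.0):
    """Determine, based on the power metadata, the amount and duration of a
    drive modification. Positive amount = overdrive beyond the
    manufacturer-determined threshold; negative amount = underdrive."""
    amount = power_metadata.get("drive_delta_pct", 0.0)
    duration = power_metadata.get("drive_duration_frames", 0)
    # Cap the overdrive so the light-emitting elements are not damaged.
    amount = min(amount, max_overdrive_pct)
    return amount, duration


def drive_level(manufacturer_threshold_nits: float, amount_pct: float) -> float:
    """Luminance target after applying the modification relative to the
    manufacturer-determined threshold."""
    return manufacturer_threshold_nits * (1.0 + amount_pct / 100.0)
```

In this sketch, a display power management system would first call `determine_drive_modification` on the received power metadata and then drive the light-emitting elements at the level returned by `drive_level` for the indicated duration.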
With regard to the processes, systems, methods, heuristics, etc. described herein, it should be understood that, although the steps of such processes, etc. have been described as occurring according to a certain ordered sequence, such processes could be practiced with the described steps performed in an order other than the order described herein. It further should be understood that certain steps could be performed simultaneously, that other steps could be added, or that certain steps described herein could be omitted. In other words, the descriptions of processes herein are provided for the purpose of illustrating certain embodiments, and should in no way be construed so as to limit the claims.
Accordingly, it is to be understood that the above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent upon reading the above description. The scope should be determined, not with reference to the above description, but should instead be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in the technologies discussed herein, and that the disclosed systems and methods will be incorporated into such future embodiments. In sum, it should be understood that the application is capable of modification and variation.
All terms used in the claims are intended to be given their broadest reasonable constructions and their ordinary meanings as understood by those knowledgeable in the technologies described herein unless an explicit indication to the contrary is made herein. In particular, use of the singular articles such as “a,” “the,” “said,” etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments incorporate more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Number | Date | Country | Kind |
---|---|---|---|
20171001.9 | Apr 2020 | EP | international |
This application claims priority of the following priority applications: U.S. provisional application 63/004,019, filed 2 Apr. 2020, and EP application 20171001.9, filed 23 Apr. 2020, each of which is incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2021/025454 | 4/1/2021 | WO |
Number | Date | Country | |
---|---|---|---|
63004019 | Apr 2020 | US |