This application claims the benefit of priority to Japanese Patent Application Number 2023-078370 filed on May 11, 2023. The entire contents of the above-identified application are hereby incorporated by reference.
The disclosure relates to an image processing device, a display device, and a control method of an image processing device.
In recent years, as disclosed in WO 2007/040139, an image processing device has been developed in which an image is displayed on a display panel unit obtained by overlapping two liquid crystal display panels having different resolutions. The display panel unit includes a backlight including a plurality of light-emitting elements, a first panel (monocell) including a plurality of cells, and a second panel (main cell or color cell) including a plurality of pixels. The first panel faces the backlight and controls a transmission amount of light at a first resolution. The second panel faces the first panel and controls a transmission amount of light at a second resolution higher than the first resolution.
According to the technique disclosed in WO 2007/040139, when an image smaller in size than a certain cell of the first panel and having luminance far higher than that of peripheral images moves within the region of that cell, no control is performed to adjust the actual luminance of the peripheral cells surrounding the certain cell. Thus, for example, problems such as flicker occur when the image having luminance far higher than that of the peripheral images crosses a boundary line between the certain cell and the peripheral cells.
The disclosure has been made in view of the problem described above. An object of the disclosure is to provide an image processing device that, when an image smaller in size than a certain cell and having higher luminance than peripheral images moves within the region of the certain cell, performs control to adjust the luminance of the peripheral cells surrounding the certain cell, as well as a display device and a control method of an image processing device.
(1) An image processing device according to techniques described in the present application is an image processing device for displaying an image on a display panel unit, the display panel unit including a backlight, a first panel facing the backlight and capable of controlling a transmission amount of light at a first resolution, and a second panel facing the first panel and capable of controlling a transmission amount of light at a second resolution higher than the first resolution, the first panel including a first cell and a plurality of second cells surrounding the first cell, and the second panel including a plurality of first pixels at positions facing the first cell and a plurality of second pixels at positions facing the plurality of second cells, the image processing device including: a first data generation unit configured to generate first data configured to control the first panel based on input image data; and a second data generation unit configured to generate second data configured to control the second panel based on the input image data and the first data, wherein the first data generation unit generates the first data for the first cell based on input luminance of the plurality of first pixels specified by the input image data and input luminance and positions of the plurality of second pixels specified by the input image data.
(2) In addition to (1), in the image processing device, the first data generation unit may generate the first data so as to suppress occurrence of flicker.
(3) In addition to (1), in the image processing device, the first data generation unit may generate the first data for the first cell based on a distance between a predetermined position inside the first cell and a specific position inside each of the plurality of second pixels and input luminance of the plurality of second pixels.
(4) In addition to (3), in the image processing device, the first data generation unit may generate the first data such that a degree of influence of input luminance of a second pixel having a relatively large distance among the plurality of second pixels on the luminance of the first cell is smaller than a degree of influence of input luminance of a second pixel having a relatively small distance among the plurality of second pixels on the luminance of the first cell.
(5) In addition to (4), in the image processing device, the first data generation unit may generate the first data such that a degree of influence of the input luminance of a second pixel having relatively large input luminance among the plurality of second pixels on the luminance of the first cell is larger than a degree of influence of the input luminance of a second pixel having relatively small input luminance among the plurality of second pixels on the luminance of the first cell.
(6) In addition to (1), in the image processing device, the first data generation unit may include a representative value setting unit configured to set a representative value of each of the plurality of second cells based on input luminance of the plurality of second pixels at positions facing each of the plurality of second cells, a luminance center of gravity calculation unit configured to calculate a luminance center of gravity of each of the plurality of second cells based on the input luminance and the positions of the plurality of second pixels, a filter calculation unit configured to calculate, based on the luminance center of gravity of each of the plurality of second cells, filter coefficients of a two dimensional filter for peripheral cells surrounding a center cell so as to include the first cell when one of the plurality of second cells is set as the center cell, and a filter processing unit configured to generate the first data for the first cell by performing filter processing on the plurality of peripheral cells by using the filter coefficients of the two dimensional filter, with the representative value as input luminance of the center cell of the two dimensional filter, for each of the plurality of second cells.
(7) In addition to (6), in the image processing device, the filter calculation unit may calculate the filter coefficients of the two dimensional filter by performing correction to increase the filter coefficients of the two dimensional filter for the plurality of peripheral cells present on a side of the luminance center of gravity with respect to the representative position of the center cell, and may calculate the filter coefficients of the two dimensional filter by performing correction to decrease the filter coefficients of the two dimensional filter for the plurality of peripheral cells present on an opposite side of the luminance center of gravity with respect to the representative position of the center cell.
(8) In addition to (7), in the image processing device, the filter calculation unit may calculate post-correction filter coefficients of the two dimensional filter by correcting the filter coefficients such that a change amount due to the correction of the filter coefficients increases as a distance between the representative position of the center cell and the luminance center of gravity increases.
(9) In addition to any one of (6) to (8), in the image processing device, the two dimensional filter may be a low pass filter.
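For illustration only, and without limiting the claimed subject matter, the coefficient correction described in (6) to (9) could be sketched as follows. The function name, the 3×3 low-pass kernel, the parameter `alpha`, and the representation of the luminance center of gravity as a normalized offset from the cell center are all assumptions made for the sketch, not details taken from the source:

```python
def shifted_lowpass_kernel(offset_x, offset_y, alpha=0.5):
    """Build a 3x3 low-pass kernel whose coefficients are increased for
    peripheral cells on the side of the luminance center of gravity
    (offset_x, offset_y, each in [-1, 1] relative to the center cell's
    representative position) and decreased on the opposite side; the
    change amount grows as the center of gravity moves farther from
    the cell center, as in (8)."""
    base = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # ordinary low-pass weights
    kernel = []
    for r in range(3):
        row = []
        for c in range(3):
            dx, dy = c - 1, r - 1  # position relative to the center cell
            # Correction factor: > 1 on the center-of-gravity side, < 1 opposite.
            corr = 1.0 + alpha * (dx * offset_x + dy * offset_y)
            row.append(base[r][c] * max(corr, 0.0))
        kernel.append(row)
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

With a zero offset this reduces to the ordinary symmetric low-pass filter of (9); a positive `offset_x` raises the coefficients of the right-hand peripheral cells and lowers those of the left-hand cells.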
(10) In addition to any one of (1) to (9), in the image processing device, the backlight may include a plurality of light-emitting regions capable of adjusting a light emission amount, the image processing device may include a backlight data generation unit configured to generate backlight data for controlling a light emission amount of each of the plurality of light-emitting regions based on the input image data, and a first panel luminance distribution calculation unit configured to calculate a luminance distribution at a position of the second panel with respect to light traveling from the first panel to the second panel based on the backlight data and the first data, and the second data generation unit may generate the second data based on the input image data and the luminance distribution.
(11) In addition to any one of (1) to (10), in the image processing device, in a front view of the display panel unit, a shape of each of the first cell and the plurality of second cells may be different from a shape of each of the plurality of first pixels and the plurality of second pixels.
(12) In addition to any one of (1) to (11), in the image processing device, in a front view of the display panel unit, a part of one cell of the first cell and the plurality of second cells and a part of an adjacent cell adjacent to the one cell may be mixed in a common region.
(13) In addition to any one of (1) to (12), in the image processing device, the first panel may be a liquid crystal panel.
(14) In addition to any one of (1) to (13), in the image processing device, the second panel may be a liquid crystal panel.
(15) A display device according to techniques described in the present application includes the display panel unit and the image processing device according to any one of (1) to (14).
(16) A control method of an image processing device according to techniques described in the present application is a control method of an image processing device for displaying an image on a display panel unit, the display panel unit including a backlight; a first panel facing the backlight and capable of controlling a transmission amount of light at a first resolution; and a second panel facing the first panel and capable of controlling a transmission amount of light at a second resolution higher than the first resolution, the first panel including a first cell and a plurality of second cells surrounding the first cell, the second panel including a plurality of first pixels at positions facing the first cell and a plurality of second pixels at positions facing the plurality of second cells, the control method of the image processing device including: generating first data configured to control the first panel based on input image data; and generating second data configured to control the second panel based on the input image data and the first data, wherein the generating the first data generates the first data for the first cell based on input luminance of the plurality of first pixels specified by the input image data and input luminance and positions of the plurality of second pixels specified by the input image data.
The disclosure will be described with reference to the accompanying drawings, wherein like numbers reference like elements.
Hereinafter, an image processing device of embodiments according to the disclosure will be described with reference to the accompanying drawings. Further, in the drawings, the same or equivalent elements are denoted by the same reference numerals and signs, and repeated descriptions thereof will be omitted.
The display device 1 includes a display panel unit 100 and an image processing device 10 that controls the display panel unit 100 as illustrated in
The display panel unit 100 includes a backlight BL, a backlight drive unit 40, a first panel WB, a first panel drive unit 20, a second panel CL, and a second panel drive unit 30. Each of the first panel WB and the second panel CL is a liquid crystal panel in the present embodiment, but may be a panel other than the liquid crystal panel.
The backlight BL is disposed to face the first panel WB (see
The plurality of LEDs are controlled such that the light emission states of the LEDs in each light-emitting region LER are identical, so that each entire light-emitting region LER emits light substantially uniformly. Local dimming, in which the light emission amount of each of the plurality of light-emitting regions LER is controlled independently, may then be performed. However, in the description of the present embodiment, the local dimming is not executed.
The backlight drive unit 40 drives each of the plurality of light-emitting regions LER constituting the backlight BL to realize output of each of the plurality of light-emitting regions LER specified with backlight data generated by the image processing device 10.
The first panel WB faces the backlight BL and is a liquid crystal display panel capable of controlling a transmission amount of light at a first resolution. The first panel WB is referred to as a monochrome panel (hereinafter, also referred to as a “monocell”) capable of performing black-and-white display (see
Each of the plurality of cells CE has no color filter. Each of the plurality of cells CE functions as an opening for adjusting a transmission amount of light emitted by the backlight BL. The area of the opening of the cell CE is variable. The first panel WB is disposed to face the second panel CL (see
The first panel drive unit (hereinafter, also referred to as a “monocell drive unit”) 20 drives a liquid crystal layer of each of the plurality of cells CE constituting the first panel WB so as to realize an aperture ratio of each of the plurality of cells CE specified with the data generated by the image processing device 10. Note that in the present specification, the aperture ratio of the cell CE means the ratio of the actual opening area of the cell CE to the maximum opening area of the cell CE.
The second panel CL (see
Each of the plurality of pixels PX includes a plurality of subpixels. In the present specification, a subpixel is referred to as a “picture element PE” (see
The second panel CL may be any panel other than the liquid crystal display panel as long as it can control the transmittance of light for each of the picture element PE (R), the picture element PE (G), and the picture element PE (B) (see
Further, a combination of the color filters of the plurality of picture elements PE constituting one pixel PX of the second panel CL is not limited to the combination of red, green, and blue, and may be, for example, a combination of yellow, magenta, and cyan. The resolution for each color of the plurality of picture elements PE constituting the second panel CL is, for example, 1920×1080. That is, the resolution for the plurality of pixels PX constituting the second panel CL is, for example, 1920×1080.
The second panel drive unit (hereinafter, also referred to as a “main cell drive unit”) 30 drives a liquid crystal layer of each of the plurality of picture elements PE constituting the second panel CL so as to realize an aperture ratio of each of the plurality of picture elements PE specified with the data generated by the image processing device 10. Note that in the present specification, the aperture ratio of the picture element PE means the ratio of the actual aperture area of the picture element PE to the maximum aperture area of the picture element PE.
The image processing device 10 controls the display panel unit 100 based on a predetermined control method, and causes the display panel unit 100 to display an image based on input image data input from the outside. In the present embodiment, the resolution of the input image data is the same as the resolution of the plurality of pixels PX, which is 1920×1080. The input image data is data with which the input gray scale values for the plurality of picture elements PE of the second panel CL can be specified.
In addition, the input image data is data with which the input image can be specified with the plurality of input gray scale values. The input image specified with the input image data corresponds to an output image displayed on the display panel unit 100. When the resolution of the input image data is different from the resolution of the plurality of pixels PX, for example, a resolution conversion unit that converts the resolution of the input image data into the resolution of the plurality of picture elements PE may be provided before the first data generation unit 11.
The image processing device 10 includes the first data generation unit 11 and a second data generation unit 12 (hereinafter also referred to as a “main cell drive value calculation unit”). In the present embodiment, each of the first data generation unit 11 and the second data generation unit 12 is realized by at least a part of the function of a processor. However, at least one of the first data generation unit 11 and the second data generation unit 12 may be configured by an electronic circuit dedicated to image processing according to the present embodiment.
The input image data is transmitted from the outside of the display device 1 to the image processing device 10. The input image data, that is, an input gray scale value of each of the plurality of picture elements PE constituting the second panel CL, is transmitted to each of the first data generation unit 11 and the second data generation unit 12 inside the image processing device 10.
The first data generation unit 11 generates the first data for controlling the aperture ratios of the plurality of cells CE based on the input image data. The first data is, for example, data corresponding to a resolution of 240×135, which is the resolution of the first panel WB. The first data generation unit 11 uses the input image data to calculate the aperture ratio of each of the plurality of cells CE constituting the first panel WB.
The second data generation unit 12 generates the second data for controlling the aperture ratios of the plurality of pixels PX based on the input image data and the first data. The second data is, for example, data corresponding to a resolution of 1920×1080, which is the resolution of the second panel CL. The second data generation unit 12 uses the input image data and the first data to calculate the aperture ratio of each of the plurality of pixels PX constituting the second panel CL.
First, the second data generation unit 12 uses the first data (drive value of the cell CE) to calculate luminance distribution of light passing through the cells CE from the backlight BL and reaching the pixels PX. That is, the second data generation unit 12 calculates luminance distribution at a position of the main cell, that is, the second panel CL. Thereafter, the second data generation unit 12 corrects the input image data by using the calculated luminance distribution so as to compensate for the lack of luminance caused by adjusting the amount of light traveling from the backlight BL to each picture element PE by controlling the aperture ratio of each cell CE of the first panel. Thereby, the second data (drive value of the pixel PX) is generated.
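For illustration only, the per-pixel compensation step could be sketched as follows, assuming the calculated luminance distribution is expressed as the fraction of full backlight luminance reaching each pixel. The function name, the fraction representation, and the clipping limit are assumptions made for the sketch:

```python
def compensate(input_luminance, reaching_fraction, max_drive=255.0):
    """Raise a pixel's main-cell drive value to offset the light removed
    by the first panel: if only `reaching_fraction` (0 < fraction <= 1)
    of the backlight light reaches the pixel, the drive value is boosted
    by the inverse factor, clipped to the panel's maximum drive value."""
    return min(max_drive, input_luminance / reaching_fraction)
```

For example, a pixel receiving half of the full backlight luminance would have its drive value doubled, up to the clipping limit.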
However, in the present embodiment, the plurality of light-emitting regions LER are controlled so that all the plurality of light-emitting regions LER have the same luminance. That is, the image processing device 10 and the backlight BL have capability of performing the local dimming, but do not perform the local dimming in the present embodiment. Note that the image processing device 10 and the backlight BL according to the present embodiment need not have the capability of performing the local dimming.
As can be seen from a comparison between
Each of the light-emitting regions LER of the backlight BL is controlled by the image processing device 10 so as to realize, for example, luminance corresponding to the maximum value of the input gray scale values, specified with the input image data, of the picture elements PE in one virtual region facing that light-emitting region LER.
However, in the present embodiment, the image processing device 10 controls all the plurality of light-emitting regions LER of the backlight BL to emit light at the same luminance. That is, the local dimming is not performed. In other words, in the present embodiment, a light emission amount of each of the plurality of light-emitting regions LER of the backlight BL specified by the input image data is the same.
Each of the cells CE of the first panel WB is controlled by the image processing device 10 so as to realize, for example, luminance corresponding to the maximum value of the input gray scale values, specified with the input image data, of the picture elements PE in one virtual region facing the cell CE. The second panel CL is controlled by the image processing device 10 so as to realize luminance of each of the plurality of picture elements PE that can be specified with the input image data.
As illustrated in
When the first panel WB includes k cells CE, the image processing device 10 performs processing for a case where each of the k cells CE serves as the first cell CE. Thus, a certain cell CE may serve as the first cell CE or the second cell CE. In addition, when the cell CE located at an end portion of the first panel WB serves as the first cell, the first cell CE may not be able to be disposed at the center position of the matrix.
The second panel CL includes the plurality of first pixels PX at a position facing the first cell CE and the plurality of second pixels PX at a position facing each of the plurality of second cells CE. That is, one cell CE at the center position of one set of cells CE formed of the 3×3 matrix faces the plurality of first pixels PX, and each of the eight cells CE at peripheral positions surrounding the cell CE at the center position faces the plurality of second pixels PX.
In
The first data generation unit 11 generates the first data for controlling the first panel WB based on the input image data. The first data generation unit 11 includes a distance calculation unit 11A and a first data calculation unit 11B. Details of each of the distance calculation unit 11A and the first data calculation unit (hereinafter also referred to as a “monocell drive value calculation unit”) 11B will be described later.
The first data generation unit 11 generates first data for the first cell CE based on the input luminance of the plurality of first pixels PX specified by the input image data and the input luminance and positions of the plurality of second pixels PX specified by the input image data.
Specifically, the distance calculation unit 11A calculates a distance D1 between a specific position of each pixel PX and a predetermined position of the cell CE. At this time, the distance calculation unit 11A calculates the distance D1 based on the cell CE and the pixel PX of the second panel (main cell) CL, which are used for the distance calculation. A point serving as a reference of the cell CE is a center position of the cell, specifically, an intersection point of diagonal lines of a rectangle.
When the processing of calculating the distance D1 is performed with the pixel PX as the reference, the cells CE used for the calculation of the distance D1 for a certain pixel PX desirably include all the cells CE included in the first panel WB. However, from the viewpoint of suppressing an increase in time and processing for the calculation of the distance D1, the cells CE used for the calculation of the distance D1 for the certain pixel PX may be limited to the cells CE in a predetermined region in the first panel WB, including some of the cells CE of all the cells CE.
On the other hand, when the processing of calculating the distance D1 is performed with the cell CE as the reference, the pixels PX used to calculate the distance D1 for a certain cell CE desirably include all the pixels PX included in the second panel CL. However, from the viewpoint of suppressing an increase in time and processing for the calculation of the distance D1, the pixels PX used for the calculation of the distance D1 for the certain cell CE may be limited to the pixels PX in a predetermined region in the second panel CL, including some of the pixels PX of all the pixels PX.
The first data calculation unit (monocell drive value calculation unit) 11B calculates a drive value of the cell CE according to the distance D1. As a method of this calculation, for example, the following method is conceivable. When the distance D1 between the specific position of a certain pixel PX and the center position of the cell CE is denoted by d, it is assumed that the drive value = (k/d) × (gray scale value (luminance) of the certain pixel PX) or the drive value = (k/d²) × (gray scale value (luminance) of the certain pixel PX).
However, when the certain pixel PX is included in a region facing the certain cell CE, it is assumed that the drive value=the gray scale value (luminance) of the certain pixel PX. In addition, it is assumed that the value of the constant k can be changed by a mechanism (such as a register) that can be changed from the outside.
Since the plurality of pixels PX are present as calculation targets of the drive values for one cell CE, the first data calculation unit (monocell drive value calculation unit) 11B calculates a plurality of the drive values for the one cell CE. The first data calculation unit (monocell drive value calculation unit) 11B sets the largest value among the plurality of drive values as the drive value of the one cell CE.
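For illustration only, the per-pixel drive value and the per-cell maximum described above could be sketched as follows; the function names and the `facing` flag are assumptions made for the sketch:

```python
def drive_value(distance, luminance, k=1.0, facing=False):
    """Illustrative drive value contributed to one cell CE by one pixel PX.

    `facing=True` covers the case where the pixel lies in the region
    facing the cell: the drive value then equals the pixel's gray scale
    value (luminance). Otherwise the contribution decays with distance
    as k/d (a k/d**2 variant falls off more steeply); the constant k is
    assumed changeable from the outside, e.g. via a register."""
    if facing:
        return luminance
    return (k / distance) * luminance

def cell_drive_value(contributions):
    """The drive value of one cell CE is the largest of the drive values
    calculated for that cell over the plurality of pixels PX."""
    return max(contributions)
```

This sketch also exhibits the two properties stated in (4) and (5): a larger distance yields a smaller contribution, and a larger input luminance yields a larger contribution.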
According to this, in a region inside the one cell CE, when an image smaller in size than the one cell CE and having higher luminance than peripheral images moves, control for adjusting the luminance of a plurality of peripheral cells CE surrounding the one cell CE can be performed.
In the present embodiment, the first data generation unit 11 generates the first data so as to suppress occurrence of flicker. Thus, when an image smaller in size than the one cell CE and having luminance far higher than that of peripheral images moves within the region of the one cell CE, the occurrence of flicker can be suppressed.
Specifically, the first data generation unit 11 generates the first data for the first cell CE based on the distance D1 between a predetermined position (CN) inside the first cell CE and a specific position SC (center position) inside each of the plurality of second pixels PX and input luminance of the plurality of second pixels PX. In the present embodiment, the predetermined position inside the first cell CE is an intersection point of diagonal lines of the rectangular cell CE, that is, the center position CN, but is not limited thereto. The specific position SC is an intersection point of diagonal lines of the rectangular second pixel PX, that is, a center position, but is not limited thereto.
According to the first data described above, the following (1) is realized. (1) The degree of influence of the input luminance of the second pixel PX having a relatively large distance D1 among the plurality of second pixels PX on the luminance of the first cell CE is smaller than the degree of influence of the input luminance of the second pixel PX having a relatively small distance D1 among the plurality of second pixels PX on the luminance of the first cell CE.
Further, according to the first data, the following (2) is realized. (2) The degree of influence of the input luminance of the second pixel PX having relatively large input luminance among the plurality of second pixels PX on the luminance of the first cell CE is larger than the degree of influence of the input luminance of the second pixel PX having relatively small input luminance among the plurality of second pixels PX on the luminance of the first cell CE.
As illustrated in
The input image memory M1 is a memory for storing the input image data for one frame. The monocell data calculation memory M2 is a memory for storing calculated values of the monocells for one frame. The monocell drive value memory M3 is a memory for storing the calculated monocell drive values for one frame.
The main cell drive value memory M4 is a memory for storing the calculated main cell drive values for one frame. Each of the input image memory M1, the monocell data calculation memory M2, the monocell drive value memory M3, and the main cell drive value memory M4 is initialized to 0 before the start of processing described with reference to the flowchart shown in
In step S1, the image processing device 10 reads the input image data for one frame from an external device. The input image data is stored in the input image memory M1. In step S2, the image processing device 10 sets a pixel PX of a first calculation target of a group of pixels PX included in the input image data. For example, the image processing device 10 sets the leftmost and uppermost pixel PX in the input image data as the first calculation target.
In step S3, the distance calculation unit 11A calculates the distance between the pixel PX of the calculation target and the monocell, specifically, the distance D1 (see
At this time, the first data calculation unit 11B calculates the drive value so as to increase the drive value (the value corresponding to the luminance) when the distance D1 is small, and decrease the drive value (the value corresponding to the luminance) when the distance D1 is large.
In step S4, the first data calculation unit (monocell drive value calculation unit) 11B calculates the drive value (value corresponding to luminance of the monocell corresponding to the distance D1) for each of the plurality of monocells (each cell CE of the first panel WB). The plurality of calculated drive values are stored in the monocell data calculation memory M2.
In step S5, the first data calculation unit (monocell drive value calculation unit) 11B compares the following (1) and (2) for each of the plurality of monocells.
(1) A drive value (corresponding to luminance) corresponding to the distance D1 between the specific position SC of the pixel PX of a previous calculation target and the predetermined position (center position CN) of specific one monocell (cell CE of the first panel WB), which is stored in the monocell data calculation memory M2.
(2) A drive value (corresponding to luminance) corresponding to the distance D1 between the specific position SC of the pixel PX of a calculation target this time and the predetermined position (center position CN) of the specific one monocell (cell CE of the first panel WB).
As a result, the first data calculation unit (monocell drive value calculation unit) 11B selects a larger drive value (corresponding to luminance) for each of the plurality of monocells as a result of the comparison between (1) and (2) described above. The selected drive value is overwritten in the monocell data calculation memory M2.
In step S6, the image processing device 10 determines whether the processing in steps S3 to S5 for all the pixels PX included in the input image data has been completed. In step S6, it may not be determined that the processing in steps S3 to S5 for all the pixels PX included in the input image data has been completed. In this case, in step S7, the image processing device 10 changes the pixel PX of the calculation target in the input image data to the next pixel PX, and repeats steps S3 to S6. For example, in step S7, the image processing device 10 sets a pixel PX one pixel to the right of the pixel PX currently set as the calculation target as a pixel PX of a new calculation target. When the pixel PX currently set as the calculation target is the pixel PX at the right end, the image processing device 10 sets a pixel PX at the left end in a row one below the input image data as the pixel PX of the new calculation target.
On the other hand, in step S6, it may be determined that the calculations in steps S3 to S5 for all the pixels PX included in the input image data have been completed. For example, this is a case where the pixel PX currently set as the calculation target is the rightmost and lowermost pixel PX in the input image data. In this case, in step S8, the first data calculation unit (monocell drive value calculation unit) 11B determines the drive value (value corresponding to luminance) of the first panel (monocell) WB.
At this time, the first data calculation unit (monocell drive value calculation unit) 11B adjusts the drive values of all the cells CE of the first panel WB by multiplying the drive values of the plurality of cells CE stored in the monocell data calculation memory M2 by a necessary coefficient or by adding an offset to the drive values. Then, the first data calculation unit (monocell drive value calculation unit) 11B stores the adjusted drive values in the monocell drive value memory M3.
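For illustration only, the one-frame pass over steps S2 to S8 could be sketched as follows, assuming pixel and cell positions are given as coordinates; the function name, the coordinate representation, and the gain/offset parameters are assumptions made for the sketch:

```python
import math

def compute_monocell_drive_values(frame, cell_centers, k=1.0, gain=1.0, offset=0.0):
    """One-frame pass over steps S2 to S8: for every pixel of the input
    image (memory M1), compute a distance-based drive value for every
    cell, keep the running maximum per cell (memory M2, steps S3 to S5),
    then adjust by a coefficient and an offset (step S8, memory M3).

    frame: list of ((x, y), luminance) pixel entries.
    cell_centers: list of (x, y) cell center positions CN.
    """
    m2 = [0.0] * len(cell_centers)  # monocell data calculation memory, initialized to 0
    for (px, py), lum in frame:     # steps S2, S6, S7: scan the calculation-target pixels
        for i, (cx, cy) in enumerate(cell_centers):
            d = math.hypot(px - cx, py - cy)      # step S3: distance D1
            v = lum if d == 0 else (k / d) * lum  # step S4: drive value from D1
            if v > m2[i]:                         # step S5: keep the larger value
                m2[i] = v
    return [gain * v + offset for v in m2]        # step S8: adjusted drive values (M3)
```

The running-maximum update corresponds to the comparison of (1) and (2) in steps S5 and overwriting the monocell data calculation memory M2 with the larger value.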
In step S10-2, the first panel drive unit (monocell drive unit) 20 drives the monocell, that is, the first panel WB by using the drive values stored in the monocell drive value memory M3.
In addition, in step S9, the second data generation unit 12 calculates luminance distribution of light passing through the cells CE from the backlight BL and reaching the pixels PX by using the drive values of the cells CE stored in the monocell drive value memory M3. That is, the second data generation unit 12 calculates the luminance distribution at the position of the main cell, that is, the second panel CL.
Thereafter, in step S10, the second data generation unit 12 corrects the input image data by using the calculated luminance distribution so as to compensate for the lack of luminance caused by limiting light traveling from the backlight BL to the pixels PX by the cells CE. The second data generation unit 12 stores the corrected input image data (corresponding to luminance) in the main cell drive value memory M4.
Thereafter, in step S10-1, the second panel drive unit (main cell drive unit) 30 drives the main cell, i.e., the second panel CL by using the corrected input image data stored in the main cell drive value memory M4.
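The correction of steps S9 and S10 can be sketched as follows. This is a minimal sketch under stated assumptions: the division-based correction and the clip to the gray scale upper limit are illustrative, since the source does not give the exact correction formula, and `luminance` is assumed to hold the relative luminance (0..1] reaching each pixel as calculated in step S9.

```python
def compensate_input_image(image, luminance, max_gray=255):
    """Step S10 (sketch): correct the input image data so that the dimming
    caused by the cells CE limiting the light from the backlight BL is
    compensated at the pixels PX of the second panel CL.  The division-based
    correction and the clipping are assumptions of this sketch."""
    corrected = []
    for img_row, lum_row in zip(image, luminance):
        corrected.append([min(max_gray, round(g / max(l, 1e-6)))
                          for g, l in zip(img_row, lum_row)])
    return corrected
```

For example, a pixel that receives only half of the backlight luminance has its gray scale value doubled, up to the upper limit of the gray scale values.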
In steps S2 to S7, the image processing device 10 sets the pixel PX of the calculation target from the input image data, and calculates the drive value of the cell CE while sequentially changing the pixel PX of the calculation target. However, the cell CE of the calculation target may be set from all the cells CE, and the drive value of the cell CE may be calculated while sequentially changing the cell CE of the calculation target. The same also applies to subsequent embodiments.
The distance calculation unit 11A preferably performs the processing of steps S3 to S5 for all the cells CE. However, doing so increases computational cost. There is a low possibility that the pixel PX of the calculation target affects the cell CE at a position far from the pixel PX of the calculation target. Thus, for example, the distance calculation unit 11A may perform the processing of steps S3 to S5 on the cells CE within a predetermined range from the pixel PX of the calculation target. The same applies to subsequent other embodiments.
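The per-pixel loop of steps S2 to S7, including the max-selection of step S5 and the restriction of the update to cells CE near the pixel PX, can be sketched as follows. The linear distance-to-drive-value falloff, the square cell pitch, and the normalization by the gray scale value are illustrative assumptions; the source does not specify the actual drive value function.

```python
import math

def drive_value_from_distance(d1, max_value=255.0, reach=3.0):
    # Hypothetical falloff: the drive value decreases linearly with the
    # distance D1 and reaches zero at `reach` cell pitches (an assumption).
    return max(0.0, max_value * (1.0 - d1 / reach))

def compute_monocell_drive_values(image, cell_pitch, n_rows, n_cols, search_range=3):
    """For each pixel PX, compute a drive value for nearby cells CE from the
    distance D1 between the pixel center SC and the cell center CN (steps S3
    and S4) and keep the larger of the stored and new values (step S5),
    updating only cells within `search_range` cells of the pixel."""
    drive = [[0.0] * n_cols for _ in range(n_rows)]  # monocell data calculation memory M2
    for py, row in enumerate(image):
        for px, gray in enumerate(row):
            cy, cx = py // cell_pitch, px // cell_pitch  # cell containing this pixel
            for r in range(max(0, cy - search_range), min(n_rows, cy + search_range + 1)):
                for c in range(max(0, cx - search_range), min(n_cols, cx + search_range + 1)):
                    # Distance D1 in units of the cell pitch.
                    d1 = math.hypot(px + 0.5 - (c + 0.5) * cell_pitch,
                                    py + 0.5 - (r + 0.5) * cell_pitch) / cell_pitch
                    candidate = drive_value_from_distance(d1) * (gray / 255.0)
                    drive[r][c] = max(drive[r][c], candidate)  # keep the larger value
    return drive
```

Because the drive value falls off continuously with the distance D1, a bright pixel raises not only the drive value of its own cell CE but also, to a lesser degree, those of the surrounding cells CE.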
In the flowchart shown in
As can be seen from a comparison between
With this configuration, the same effects as those obtained by the image processing device 10 of the first embodiment can be obtained by a memory having a capacity smaller than that of the image processing device 10 according to the first embodiment.
In step S11, the image processing device 10 reads the input image data for one frame from the external device. The input image data is stored in the input image memory M1.
In step S12, the image processing device 10 sets a line (row) of the calculation target of the monocell (first panel WB) and a line memory to be used. Specifically, the distance calculation unit 11A sets the line (row) of the calculation target of the first panel WB (monocell). For example, first, the uppermost line of the first panel WB (monocell) is set as the line of the calculation target, and thereafter, the lines below the uppermost line are sequentially set as the lines of the calculation targets. Further, the distance calculation unit 11A sets one of the monocell data calculation line memories M2-1 and M2-2 to be used for the subsequent calculation. At this time, two line memories are alternately used.
In step S13, the image processing device 10 sets a calculation target range in the input image data. The setting of the calculation target range will be described in detail later. Then, the image processing device 10 sets the pixel PX of the first calculation target in the calculation target range of the input image data. For example, the image processing device 10 sets the leftmost and uppermost pixel PX in the calculation target range of the input image data as the first calculation target. The pixel PX of the calculation target is set from the upper left pixel PX in the input image data in order in the right direction, and when the right end of the input image data is reached, the pixel PX at the left end of one line below the input image data is set as the pixel PX of the calculation target.
A point different from step S2 of the first embodiment is that in the processing of steps S13 to S18 of the present embodiment, not all the input image data but some of the pixels PX are set as the calculation targets. More details will be described below.
In the second example of the image processing device 10 according to the present embodiment, in the processing of steps S13 to S18, data is updated in the calculation of the distance D1 only for one line (row) of the group of cells CE constituting the first panel (monocell) WB. Thus, the processing of steps S13 to S18 is performed only on the pixels PX in the input image data that are used to calculate the distance D1 for the cells CE of that one line (that is, the pixels PX included in the calculation target range of the input image data). The processing of step S13 is to set the calculation target range of the input image data. For example, in step S13, the image processing device 10 sets, as the calculation target range, a plurality of lines in the input image data whose distance in the direction perpendicular to the line is within a predetermined range with respect to the one line of the calculation target of the first panel (monocell) WB determined in step S12. According to this processing, when the one line of the calculation target of the first panel (monocell) WB determined in step S12 changes, the calculation target range of the input image data to be determined in step S13 also changes.
In step S14, the distance calculation unit 11A calculates the distance D1 between the specific position SC (center position) of the pixel PX of the calculation target and the predetermined positions (center positions CN) of the plurality of monocells (first panel WB) included in the line of the calculation target. That is, the distance calculation unit 11A calculates the distance D1 between one pixel PX of the calculation target and each of the plurality of cells CE. In step S15, the first data calculation unit (monocell drive value calculation unit) 11B calculates the drive value (value corresponding to luminance) of the monocell (each cell CE of the first panel WB) corresponding to the distance D1.
In step S16, the first data calculation unit (monocell drive value calculation unit) 11B compares the following (1) and (2) for each of the plurality of monocells.
(1) A drive value (corresponding to luminance) corresponding to the distance D1 between the pixel PX of the previous calculation target and the specific one monocell (first panel WB), which is stored in the monocell calculation line memory (one of the M2-1 and M2-2) currently used for the calculation.
(2) A drive value (corresponding to luminance) corresponding to the distance D1 between the pixel PX of the calculation target this time and the specific one monocell (first panel WB).
As a result of the comparison, the first data calculation unit (monocell drive value calculation unit) 11B selects the larger drive value (corresponding to luminance) for each of the plurality of monocells that have been the calculation target. The selected drive value is written over the previous value in the monocell data calculation line memory (one of M2-1 and M2-2) currently used for the calculation.
Next, in step S17, the image processing device 10 determines whether the processing in steps S14 to S16 for all the pixels PX included in the calculation target range of the input image data has been completed. In step S17, it may not be determined that the processing in steps S14 to S16 for all the pixels PX included in the calculation target range of the input image data has been completed. In this case, in step S18, the image processing device 10 changes the pixel PX of the calculation target in the input image data to the pixel PX of the next calculation target, and then repeats steps S14 to S17.
For example, in step S18, the image processing device 10 sets a pixel PX one pixel to the right of the pixel PX currently set as the calculation target as a pixel PX of a new calculation target. When the pixel PX currently set as the calculation target is the pixel PX at the right end, the pixel PX at the left end in the row one below in the input image data is set as the pixel PX of the new calculation target.
On the other hand, in step S17, it may be determined that the processing in steps S14 to S16 for all the pixels PX included in the calculation target range in the input image data has been completed. This means that the processing related to the calculation target line of the monocell has been completed and the drive value of each monocell included in the calculation target line has been calculated. In this case, in step S19, the second data generation unit 12 determines whether the drive values (corresponding to luminance) of the cells CE of all the lines of the monocell (first panel WB) have been determined.
As a result, in step S19, it may not be determined that the drive values (corresponding to luminance) of the cells CE of all the lines of all the monocells (first panel WB) have been determined. In this case, in step S20, the image processing device 10 changes the line of the calculation target of the monocell (first panel WB) to a line of a next calculation target. In this case, two line memories are alternately used. Thereafter, in step S21, the image processing device 10 sets storage information of the line memory to be used next to 0, that is, initializes the storage information, and repeats steps S13 to S19.
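The alternating use of the two line memories in steps S12, S20, and S21 can be sketched as follows. The `process_line` callback is a hypothetical stand-in for the per-line processing of steps S13 to S18; the point of the sketch is only the ping-pong reuse and initialization of the memories M2-1 and M2-2.

```python
def process_lines(n_lines, process_line):
    """Alternate between two line memories (M2-1 and M2-2): the memory to be
    used next is initialized to 0 (step S21), filled by the per-line
    processing (steps S13 to S18), and its finished contents are then read
    out while the other memory is reused for the next line (step S20)."""
    line_memories = [[], []]           # monocell data calculation line memories M2-1, M2-2
    results = []
    for line in range(n_lines):
        mem = line_memories[line % 2]  # the two line memories are alternately used
        mem.clear()                    # step S21: initialize the storage information to 0
        process_line(line, mem)        # steps S13 to S18 fill the line memory
        results.append(list(mem))      # the finished line is consumed (e.g., step S33)
    return results
```

Because only two line memories are needed instead of a full-frame memory, this corresponds to the smaller memory capacity noted for this example.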
As shown in
In step S32, the image processing device 10 determines whether the calculation of the line of the calculation target of the monocell in the processing of the flowchart 1 has been completed. Specifically, the image processing device 10 determines whether the determination result of step S17 of the flowchart 1 is Yes. In step S32, if the calculation of the line of the calculation target of the matrix of the cells CE constituting the monocell (first panel WB) has not been completed (specifically, if the determination result of step S17 of the flowchart 1 is not Yes), then the image processing device 10 repeats the processing of step S32.
On the other hand, in step S32, the calculation of the line of the calculation target of the monocell in the processing of the flowchart 1 may have been completed (specifically, a case where the determination in step S17 of the flowchart 1 is Yes). In this case, in step S33, the first data calculation unit (monocell drive value calculation unit) 11B determines the drive value (value corresponding to the luminance) of the line of the calculation target of the first panel (monocell) WB. At this time, the first data calculation unit (monocell drive value calculation unit) 11B multiplies the drive values of the plurality of cells CE stored in the monocell data calculation line memories by a necessary coefficient or adds an offset to the drive values. As a result, the first data calculation unit (monocell drive value calculation unit) 11B adjusts the drive value of the line of the calculation target of the first panel (monocell) WB. Then, the first data calculation unit (monocell drive value calculation unit) 11B stores the adjusted drive value in a region corresponding to the line of the calculation target in the monocell drive value memory M3.
In step S34, the image processing device 10 determines whether the calculation of all the lines of the matrix of the cells CE constituting the monocell (first panel WB) has been completed.
In step S34, the calculation of all the lines of the monocell (cells CE of the first panel WB) may not have been completed. In this case, in step S35, the image processing device 10 changes the line of the calculation target of the matrix of the cells CE constituting the monocell (first panel WB) to the next line. On the other hand, in step S34, if the calculation of all the lines of the matrix of the cells CE constituting the monocell (first panel WB) has been completed, then the image processing device 10 ends the processing.
In step S41, the second data generation unit 12 calculates the luminance distribution on the main cell (pixel PX of the second panel CL) in the same manner as the processing in step S9. At the same time, in step S44, the first panel drive unit (monocell drive unit) 20 drives the monocell (first panel WB) by using the drive values stored in the monocell drive value memory M3.
In step S42, the second data generation unit 12 corrects the input image data read from the external device by using the calculated luminance distribution described above, and stores the corrected input image data in the main cell drive value memory M4, in the same manner as the processing in step S10. Thereafter, in step S43, the second panel drive unit (main cell drive unit) 30 drives the main cell (second panel CL) by using the gray scale values of the corrected input image data stored in the main cell drive value memory M4.
According to the above processing, when the first data calculation unit (monocell drive value calculation unit) 11B performs the processing of step S33 on the n-th line of the monocell in the processing of flowchart 2, the image processing device 10 determines (S20) the next calculation target line (n+1-th line) of the monocell in the processing of flowchart 1. As a result, the image processing device 10 executes processing related to the calculation target line. By performing such processing, the processing of the flowchart 1 and the processing of the flowchart 2 are executed in parallel.
Further, the image processing device 10 may sequentially perform the processing of the flowchart 3 from the point where the drive values of the predetermined number of monocells necessary for the processing of steps S41 and S42 have been determined by the processing of the flowcharts 1 and 2. By doing so, the processing of the flowchart 3 is executed in parallel with the processing of the flowcharts 1 and 2.
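The parallel execution of the flowcharts 1 and 2, where flowchart 2 finalizes line n while flowchart 1 already works on line n + 1, can be sketched as a two-stage pipeline. The `compute_line` and `finalize_line` callbacks are hypothetical stand-ins for the per-line calculation and for step S33; the one-slot queue models the single finished line handed over between the two flowcharts.

```python
import queue
import threading

def pipelined(n_lines, compute_line, finalize_line):
    """Run the per-line calculation (flowchart 1) and the per-line
    finalization (flowchart 2, step S33) concurrently, handing one finished
    line at a time from the first stage to the second."""
    handoff = queue.Queue(maxsize=1)   # at most one finished line in flight
    finalized = []

    def flowchart1():
        for n in range(n_lines):
            handoff.put((n, compute_line(n)))  # hand the finished line to flowchart 2
        handoff.put(None)                      # end-of-frame marker

    def flowchart2():
        while (item := handoff.get()) is not None:
            n, data = item
            finalized.append(finalize_line(n, data))  # step S33 for line n

    t1 = threading.Thread(target=flowchart1)
    t2 = threading.Thread(target=flowchart2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return finalized
```

With a single producer and a single consumer on one FIFO queue, the lines are finalized in order, matching the line-by-line progression of the flowcharts.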
As described above, the image processing device 10 can calculate the monocell data by using the monocell data calculation memory M2 (
In
In
As can be seen from
As illustrated in
As can be seen from
As shown in
As described above, according to the image processing device 10 of the present embodiment, the aperture ratio of each cell CE continuously changes even when the high luminance region BA in the input image data IM moves in a region inside the one cell CE or moves across the plurality of cells CE. Thus, flicker can be suppressed. As another means for continuously changing the aperture ratio of each cell CE, it is conceivable to perform filter processing in the time direction. However, the processing in the image processing device 10 of the present embodiment is processing that is completed for each frame of the input image data, and is not processing that requires a plurality of continuous frames of the input image data. Thus, according to the image processing device 10 of the present embodiment, resources such as a memory required for the processing can be reduced and a delay time caused by the processing can be shortened.
The image processing device 10 according to a second embodiment will be described with reference to
As illustrated in
The representative value setting unit 11C sets a representative value of each of the plurality of second cells CE based on the input luminance of the plurality of second pixels PX at a position facing each of the plurality of second cells CE. Specifically, the representative value setting unit 11C uses, for example, the maximum value among luminance values of the plurality of pixels PX in the region facing one cell CE as the representative value of the cell CE. In addition, the luminance center of gravity calculation unit 11D calculates the luminance center of gravity of each of the plurality of second cells CE based on the input luminance and the positions of the plurality of second pixels PX.
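The representative value setting performed by the representative value setting unit 11C can be sketched as follows. As in the source, the maximum luminance among the pixels PX facing a cell CE is used as that cell's representative value; mapping a pixel to its cell by integer division with a square cell pitch is an illustrative assumption of this sketch.

```python
def cell_representative_values(image, cell_pitch, n_rows, n_cols):
    """Representative value of each cell CE: the maximum luminance among the
    pixels PX in the region facing that cell."""
    rep = [[0] * n_cols for _ in range(n_rows)]  # representative value per cell
    for py, row in enumerate(image):
        for px, gray in enumerate(row):
            r, c = py // cell_pitch, px // cell_pitch  # cell facing this pixel
            if gray > rep[r][c]:  # keep the running maximum
                rep[r][c] = gray
    return rep
```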
The filter calculation unit 11E calculates a filter coefficient for each of the plurality of second cells CE. That is, when one of the plurality of second cells CE is set as a center cell CE, the filter calculation unit 11E calculates, based on the luminance center of gravity of the center cell CE, the filter coefficients of the two dimensional filter for the center cell CE and a plurality of (for example, eight) peripheral cells CE surrounding the center cell CE so as to include the first cell CE, that is, for a total of nine cells CE. In this case, the filter calculation unit 11E provides a bias to the filter coefficients constituting the matrix based on the calculation result of the luminance center of gravity. The filter before the bias is provided is, for example, a low pass filter such as a Gaussian filter or a smoothing filter.
For each of the plurality of second cells CE, the filter processing unit 11F performs filter processing on the representative values of the plurality of cells CE with the second cell CE as the center cell by using the filter coefficients calculated by the filter calculation unit 11E. Thereby, the first data for the first cell CE, that is, the drive value of the cell CE, is generated. That is, the filter processing unit 11F performs the filter processing on the representative values of the plurality of cells CE constituting the first panel WB, thereby performing blur processing on the representative values of the plurality of cells CE. Note that, as in the first embodiment, a certain cell CE may serve as the first cell CE or the second cell CE. Thus, the processing performed by the representative value setting unit 11C, the luminance center of gravity calculation unit 11D, the filter calculation unit 11E, and the filter processing unit 11F is performed on all the cells CE as a result.
According to the image processing device 10 as well, when an image smaller in size than the certain cell CE and having higher luminance than peripheral images moves in a region inside the certain cell CE, control for adjusting the actual luminance of the peripheral cells CE surrounding the certain cell CE can be performed. As a result, when the image IM smaller in size than the certain cell CE and having higher luminance than peripheral images moves in a region inside the certain cell CE, the image processing device 10 can suppress the occurrence of flicker.
As illustrated in
The input image memory M1 is a memory for storing the input image data for one frame. The monocell representative value memory M40 is a memory for storing the representative value (maximum value) of the luminance of the plurality of pixels PX facing the monocells for one frame, that is, the cells CE of the first panel WB. The monocell drive value memory M3 is a memory for storing a plurality of monocell drive values calculated for the plurality of pixels PX for one frame.
The luminance center of gravity calculation memory M5 is a luminance center of gravity calculation memory for each cell CE for one frame. The luminance center of gravity calculation memory M5 stores five values of Equations (3) to (7) described later for one cell CE. The main cell drive value memory M4 is a memory for storing the calculated main cell drive values for one frame, that is, the drive values (luminance) of the pixels PX of the second panel CL.
As illustrated in
The filter calculation unit 11E calculates the filter coefficients of the two dimensional filter by performing correction for the plurality of peripheral cells CE present on a side of the luminance center of gravity LC to increase the filter coefficients of the low pass filter with the representative position RP of the center cell CE as the reference. The representative position RP is, for example, the center position of the cell CE, specifically, the position of the intersection of the diagonal lines of the rectangular center cell CE, that is, the center position CN.
On the other hand, the filter calculation unit 11E performs correction for the plurality of peripheral cells CE present on a side opposite to the side of the luminance center of gravity LC to decrease the filter coefficients of the low pass filter with the representative position RP of the center cell CE as the reference. As a result, post-correction filter coefficients of the two dimensional filter are calculated.
Specifically, the filter calculation unit 11E corrects the coefficients of the low pass filter so that an amount of change of the coefficients of the low pass filter due to the correction increases as the distance D2 between the representative position RP of the center cell CE and the luminance center of gravity LC increases. As a result, the filter coefficients of the two dimensional filter are calculated.
For this reason, the filter coefficients of the two dimensional filter are corrected using the following correction factors Rh+, Rv+, Rh−, and Rv−. In a coordinate system specified by an X direction and a Y direction, coordinates on the right side of the center position in the X direction are represented by a positive sign, and coordinates on the lower side of the center position in the Y direction are represented by a positive sign.
The correction factor Rh+ is a value for correcting a filter coefficient located on the right side (positive) of the center in the X direction. The correction factor Rh− is a value for correcting a filter coefficient located on the left side (negative) of the center in the X direction. The correction factor Rv+ is a value for correcting a filter coefficient located on the lower side (positive) of the center in the Y direction. The correction factor Rv− is a value for correcting a filter coefficient located on the upper side (negative) of the center in the Y direction.
Each of the correction factors Rh+, Rv+, Rh−, and Rv− is calculated by the following calculation formula.
Correction factor Rh+, Rv+, Rh−, or Rv− = (sign) × constant C × distance D2 (Calculation formula)
Note that C is a predetermined value. As described above, the distance D2 is the distance between the center position CN of the center cell CE and the luminance center of gravity LC in the center cell CE.
Thus, the larger the distance D2 between the center position CN of the center cell CE and the luminance center of gravity LC, the larger the absolute value of the correction factor of the filter coefficient, and the smaller the distance D2 between the center position CN of the center cell CE and the luminance center of gravity LC, the smaller the absolute value of the correction factor of the filter coefficient. Further, the filter coefficient of the center cell CE is not affected by the position of the luminance center of gravity of the peripheral cells CE.
The filter calculation unit 11E determines the sign in the calculation formula of the correction factor as follows according to the relationship between the position of the luminance center of gravity LC of the center cell CE and the center position CN of the center cell CE.
When the luminance center of gravity LC of the center cell CE is located in the positive direction of the center position CN of the center cell CE in the X direction, the filter calculation unit 11E sets the sign of Rh+ to + and sets the sign of Rh− to −. When the luminance center of gravity LC of the center cell CE is located in the negative direction of the center position CN of the center cell CE in the X direction, the filter calculation unit 11E sets the sign of Rh+ to − and sets the sign of Rh− to +.
When the luminance center of gravity LC of the center cell CE is located in the positive direction of the center position CN of the center cell CE in the Y direction, the filter calculation unit 11E sets the sign of Rv+ to + and sets the sign of Rv− to −. When the luminance center of gravity LC of the center cell CE is located in the negative direction of the center position CN of the center cell CE in the Y direction, the filter calculation unit 11E sets the sign of Rv+ to − and sets the sign of Rv− to +.
In the upper left peripheral cell matrix UPL, the post-correction filter coefficients are calculated by adding Rh− and Rv− to pre-correction filter coefficients. In the center upper peripheral cell column UPM, the post-correction filter coefficients are calculated by adding Rv− to the pre-correction filter coefficients. In the upper right peripheral cell matrix UPR, the post-correction filter coefficients are calculated by adding Rh+ and Rv− to the pre-correction filter coefficients.
In the left center peripheral cell row LEM, the post-correction filter coefficients are calculated by adding Rh− to the pre-correction filter coefficients. In the right center peripheral cell row RIM, the post-correction filter coefficients are calculated by adding Rh+ to the pre-correction filter coefficients.
In the lower left peripheral cell matrix LOL, the post-correction filter coefficients are calculated by adding Rh− and Rv+ to the pre-correction filter coefficients. In the lower center peripheral cell column LOM, the post-correction filter coefficients are calculated by adding Rv+ to the pre-correction filter coefficients. In the lower right peripheral cell matrix LOR, the post-correction filter coefficients are calculated by adding Rh+ and Rv+ to the pre-correction filter coefficients.
On the other hand, in the center cell CE, the filter coefficient is not corrected.
The above-described filter processing is executed for each of the plurality of cells CE. For example, when there are 144 (=16×9) cells CE, filter calculation and filter processing using “post-correction 7×7 BLUR filter” are performed 144 times.
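The correction of the filter coefficients with the factors Rh+, Rh−, Rv+, and Rv− described above can be sketched as follows for a 3×3 filter (the nine-cell case). The 3×3 size and the value of the constant C are illustrative assumptions; the source also mentions a 7×7 BLUR filter, to which the same sign rules apply region by region.

```python
import math

def corrected_filter(base, cx_off, cy_off, C=0.02):
    """Correct a 3x3 low pass filter `base` with the factors Rh+/Rh-/Rv+/Rv-.
    (cx_off, cy_off) is the offset of the luminance center of gravity LC from
    the center position CN (positive = right / down), so the distance D2 is
    its magnitude.  The constant C and the 3x3 size are assumptions."""
    d2 = math.hypot(cx_off, cy_off)
    # Signs per the source: Rh+ is positive when LC lies to the right of CN,
    # Rv+ is positive when LC lies below CN; Rh-/Rv- take the opposite signs.
    rh = C * d2 * (1 if cx_off > 0 else -1 if cx_off < 0 else 0)
    rv = C * d2 * (1 if cy_off > 0 else -1 if cy_off < 0 else 0)
    out = [row[:] for row in base]
    for y in range(3):
        for x in range(3):
            if x == 1 and y == 1:
                continue  # the filter coefficient of the center cell is not corrected
            if x != 1:
                out[y][x] += rh if x == 2 else -rh  # Rh+ on the right, Rh- on the left
            if y != 1:
                out[y][x] += rv if y == 2 else -rv  # Rv+ below, Rv- above
    return out
```

With LC offset toward the lower right, the coefficients on the lower right increase and those on the upper left decrease, while the center coefficient stays unchanged, as described above.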
In the two dimensional filter F illustrated in
As illustrated in
The position of the luminance center of gravity LC of the center cell CE of the two dimensional filter F is located obliquely below and to the right of the center position CN (representative position RP) of the center cell CE. Thus, the correction factors satisfy Rh+ > 0, Rv+ > 0, Rh− < 0, and Rv− < 0. With these correction factors, the two dimensional filter F is corrected overall so that the filter coefficients located on the upper left side, that is, in the region L become smaller, and the filter coefficients located on the lower right side, that is, in the region S become larger.
That is, in the two dimensional filter F illustrated in
As shown in
As shown in
It is assumed that there is a region including n pixels PX, an x coordinate of an i-th pixel PX is xi, a y coordinate of the i-th pixel PX is yi, and luminance (gray scale value) of the i-th pixel PX is Gi. When an x coordinate of the luminance center of gravity is cx and a y coordinate of the luminance center of gravity is cy, cx and cy are calculated by the following Equations (1) and (2), where each sum is taken over i = 1 to n.

cx = Σ(xi × Gi) / ΣGi   (1)
cy = Σ(yi × Gi) / ΣGi   (2)

The numerators and denominators of Equations (1) and (2) can be expressed in separate Equations as follows.

ctx = Σ(xi × Gi)   (3)
cty = Σ(yi × Gi)   (4)
cbx = cby = ΣGi   (5)
cx = ctx / cbx   (6)
cy = cty / cby   (7)

The luminance center of gravity LC is calculated using the above-described Equations (3), (4), (5), (6), and (7).
As shown in
When the post-correction filter coefficients shown in
In step S51, the image processing device 10 reads the input image data for one frame from the external device. In step S52, the image processing device 10 sets a pixel PX of a first calculation target of the input image data. For example, the image processing device 10 sets the leftmost and uppermost pixel PX in the input image data as the first calculation target. In step S53, the representative value setting unit 11C compares the previous provisional representative value of the monocell (cell CE) whose region includes the pixel PX of the calculation target with the gray scale value of the pixel PX of the calculation target. When the gray scale value of the pixel PX of the calculation target is larger than the previous provisional representative value of the monocell (cell CE) whose region includes the pixel PX of the calculation target, the representative value setting unit 11C stores the gray scale value of the pixel PX of the calculation target in the monocell representative value memory M40 as a new provisional representative value of the monocell. When the processing of step S53 is performed on all the pixels PX included in the input image data for one frame, the maximum gray scale value of the pixels PX included in the region of each monocell is selected as the representative value for the monocell.
In step S54, the luminance center of gravity calculation unit 11D performs the accumulation used to calculate the luminance center of gravity LC. Specifically, the luminance center of gravity calculation unit 11D calculates the values related to the above-described Equations (3) to (5) for the pixel PX of the calculation target. Then, the luminance center of gravity calculation unit 11D adds the values related to the above-described Equations (3) to (5) to the values of ctx, cty, and cbx (= cby), respectively, which are related to each monocell and stored in the luminance center of gravity calculation memory M5. When the processing of step S54 is performed on all the pixels PX included in the input image data for one frame, the values of ctx, cty, and cbx (= cby) are calculated for each monocell.
In step S55, the image processing device 10 determines whether the calculations in steps S53 and S54 for all the pixels PX included in the input image data have been completed.
In step S55, it may not be determined that the calculations in steps S53 to S54 for all the pixels PX included in the input image data have been completed. In this case, in step S56, the image processing device 10 changes the pixel PX of the calculation target in the input image data to the next pixel PX, and repeats steps S53 to S55.
On the other hand, in step S55, if it is determined that the calculations in steps S53 and S54 for all the pixels PX included in the input image data have been completed, then in step S57, the luminance center of gravity calculation unit 11D calculates the luminance center of gravity LC. The method of calculating the luminance center of gravity LC is as described above. That is, the luminance center of gravity LC is calculated by performing the calculations of Equations (6) and (7). The luminance center of gravity calculation unit 11D stores the calculated value of the luminance center of gravity LC in the luminance center of gravity calculation memory M5.
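The accumulation of step S54 and the division of step S57 can be sketched together as follows, following Equations (3) to (7). Representing each pixel PX of a cell's region as an (x, y, G) tuple is an assumption of this sketch.

```python
def luminance_center_of_gravity(pixels):
    """Accumulate ctx = sum(xi * Gi), cty = sum(yi * Gi), and cbx = cby =
    sum(Gi) pixel by pixel (Equations (3) to (5), step S54), then divide to
    obtain the luminance center of gravity LC (Equations (6) and (7),
    step S57)."""
    ctx = cty = cb = 0.0
    for x, y, g in pixels:
        ctx += x * g   # Equation (3)
        cty += y * g   # Equation (4)
        cb += g        # Equation (5): cbx = cby
    if cb == 0:
        return None    # no luminance in the region: LC is undefined
    return ctx / cb, cty / cb  # Equations (6) and (7)
```

Because only the three running sums per monocell need to be stored, the per-pixel accumulation matches the five values held in the luminance center of gravity calculation memory M5 (ctx, cty, cbx = cby, and the resulting cx and cy).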
Next, in step S58, the filter calculation unit 11E calculates the correction factors of the filter coefficients of the two dimensional filter F by using the value of the luminance center of gravity LC, and corrects the filter coefficients of the two dimensional filter F by using the calculated correction factors. Thereafter, in step S59, the filter processing unit 11F performs filter processing on each monocell (cell CE) by using the corrected filter coefficients of the two dimensional filter.
As illustrated in
The image processing device 10 according to a third embodiment will be described with reference to
In the present embodiment, the backlight BL includes the plurality of light-emitting regions LER capable of independently adjusting a light emission amount of each of the plurality of light-emitting regions LER. That is, in the present embodiment, the image processing device 10 performs the local dimming. The image processing device 10 of the present embodiment includes a backlight data generation unit 13 and a first panel luminance distribution calculation unit 14. The image processing device 10 of the present embodiment includes a second data generation unit 22 instead of the second data generation unit 12.
The backlight data generation unit 13 generates the backlight data for controlling the respective light emission amounts of the plurality of light-emitting regions LER based on the input image data. The first panel luminance distribution calculation unit 14 calculates the luminance distribution (monocell luminance distribution data) at the position of the second panel CL with respect to light traveling from the first panel WB to the second panel CL based on the backlight data and the first data. The second data generation unit 22 generates second data based on the input image data and the monocell luminance distribution data.
Unlike the image processing device 10 of the first and second embodiments, the image processing device 10 of the present embodiment generates the backlight data for controlling the output of the plurality of light-emitting regions LER based on the input image data. The backlight data is data corresponding to a resolution of 6×4. The backlight data generation unit 13 generates, from the input image data, a value of the output of each of the plurality of light-emitting regions LER constituting the backlight BL (e.g., a lighting rate = actual luminance value/maximum luminance value).
The backlight data generation unit (backlight luminance distribution calculation unit) 13 acquires, as an example, a representative value of the input gray scale values of several picture elements PE, that is, several subpixels, included in one virtual region facing one light-emitting region LER. The representative value is, for example, the maximum value, the average value, the median value, the value of 80% of the maximum value, or the like of the input gray scale values of the several picture elements PE included in one virtual region facing one certain light-emitting region LER.
Thereafter, the backlight data generation unit 13 generates, as the value of the output of the one certain light-emitting region LER, a value obtained by dividing the representative value of the input gray scale values of the several picture elements PE in the one virtual region by the upper limit value of the input gray scale values. The upper limit value of the input gray scale values refers to a maximum value of the input gray scale values. The backlight data generation unit 13 outputs the value of the output of each light-emitting region LER obtained in this way as data (backlight data) for controlling the backlight. The backlight drive unit 40 controls the output of each light-emitting region LER of the backlight BL according to the backlight data.
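The processing of the backlight data generation unit 13 described above can be sketched as follows. This is a minimal illustration, assuming an input image whose dimensions divide evenly into the virtual regions and assuming the 6×4 backlight resolution mentioned above (four rows of six light-emitting regions LER); the function name and the `representative` switch are illustrative only:

```python
import numpy as np

def generate_backlight_data(input_image, regions=(4, 6),
                            representative="max", gray_max=255):
    """Lighting rate (0..1) for each light-emitting region LER.

    `input_image` is a 2-D array of input gray scale values whose
    height and width are integer multiples of `regions` (rows, cols).
    A representative value of each virtual region is divided by the
    upper limit gray scale value `gray_max`, as in the text.
    """
    img = np.asarray(input_image, dtype=float)
    rr, rc = regions
    h, w = img.shape
    # Split the image into one block per virtual region.
    blocks = img.reshape(rr, h // rr, rc, w // rc)
    if representative == "max":
        rep = blocks.max(axis=(1, 3))
    elif representative == "mean":
        rep = blocks.mean(axis=(1, 3))
    elif representative == "median":
        rep = np.median(blocks, axis=(1, 3))
    else:  # e.g. "the value of 80% of the maximum value"
        rep = 0.8 * blocks.max(axis=(1, 3))
    return rep / gray_max  # lighting rate per light-emitting region
```

The resulting 4×6 array would correspond to the backlight data that the backlight drive unit 40 uses to control the output of each light-emitting region LER.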
The first panel luminance distribution calculation unit (monocell luminance distribution calculation unit) 14 calculates the monocell luminance distribution data by using the backlight data and the first data. The monocell luminance distribution data is a luminance distribution at the position of the second panel CL with respect to light emitted from each light-emitting region LER of the backlight BL, passing through each cell CE of the first panel, and traveling toward each picture element PE of the second panel CL. The first panel luminance distribution calculation unit 14 may calculate the monocell luminance distribution data by using a point spread function (PSF) for calculating the distribution of the luminance of light traveling from the light-emitting regions LER included in the backlight BL to the cells CE included in the first panel WB. The first panel luminance distribution calculation unit 14 may calculate the monocell luminance distribution data by using a point spread function (PSF) for calculating the distribution of the luminance of light traveling from the cells CE included in the first panel WB to the picture elements PE included in the second panel CL.
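As a rough illustration of the two-stage PSF calculation described above, the monocell luminance distribution can be modeled by spreading the backlight lighting rates onto the cell grid, attenuating by the aperture ratios in the first data, and spreading again onto the picture-element grid. The plain convolution kernels and the nearest-neighbor upsampling below are simplifying assumptions; the actual point spread functions would be device-specific:

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Same-size 2-D correlation with zero padding (PSF assumed symmetric)."""
    kernel = np.asarray(kernel, dtype=float)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out

def monocell_luminance(backlight_data, first_data, psf_bl, psf_cell,
                       scale_bl, scale_cell):
    """Luminance distribution at the second panel CL (sketch only).

    Backlight lighting rates are spread onto the first-panel (cell)
    grid with `psf_bl`, multiplied by the cell aperture ratios in
    `first_data`, then spread onto the picture-element grid with
    `psf_cell`.  `scale_bl` / `scale_cell` give the grid ratios.
    """
    # Upsample backlight grid to the cell grid and apply the first PSF.
    bl = np.kron(np.asarray(backlight_data, float), np.ones(scale_bl))
    bl = convolve2d_same(bl, psf_bl)
    # Light leaving each cell: arriving luminance x aperture ratio.
    cell_out = bl * np.asarray(first_data, float)
    # Upsample to the picture-element grid and apply the second PSF.
    pe = np.kron(cell_out, np.ones(scale_cell))
    return convolve2d_same(pe, psf_cell)
```

With identity PSFs and unit scales this degenerates to a simple product of lighting rate and aperture ratio, which is a convenient sanity check for the grid bookkeeping.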
The second data generation unit (main cell drive value calculation unit) 22 generates the second data for controlling the aperture ratios of the plurality of picture elements PE by correcting the input image data based on the input image data and the monocell luminance distribution data. The second data is generated so as to compensate for the lack of luminance caused by adjusting the amount of light traveling from the backlight BL to each picture element PE by controlling the light emission luminance of each light-emitting region LER of the backlight BL and controlling the aperture ratio of each cell CE of the first panel. Thereafter, the second data generation unit 22 transmits the second data to the second panel drive unit 30.
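The compensation performed by the second data generation unit 22 can be sketched as dividing the target luminance of each picture element PE by the luminance actually arriving from the first panel. The gamma value, the clipping, and the function name below are simplifying assumptions, not part of the disclosure:

```python
import numpy as np

def generate_second_data(input_image, monocell_luminance,
                         gray_max=255, gamma=2.2):
    """Second data compensating the luminance shortfall (sketch).

    Target linear luminance is divided by the luminance arriving at
    each picture element PE, so that (arriving light) x (aperture
    ratio) reproduces the input image where physically possible.
    """
    # Input gray scale -> linear target luminance (assumed gamma).
    target = (np.asarray(input_image, float) / gray_max) ** gamma
    # Avoid division by zero where no light arrives.
    arriving = np.clip(np.asarray(monocell_luminance, float), 1e-6, None)
    # Required aperture ratio, clipped to the physically possible range.
    aperture = np.clip(target / arriving, 0.0, 1.0)
    # Back to gray scale drive values for the second panel CL.
    return np.round((aperture ** (1.0 / gamma)) * gray_max).astype(int)
```

Where the required aperture ratio exceeds 1.0 the clipping leaves a residual luminance shortfall; handling that case (e.g., by raising the backlight output) is outside this sketch.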
Note that in the first embodiment, the second data generation unit 12 also generates monocell luminance distribution data. On the other hand, in the present embodiment, it is necessary to use the backlight data in addition to the first data in order to calculate the monocell luminance distribution data. Thus, the image processing device 10 of the present embodiment includes the first panel luminance distribution calculation unit 14 in addition to the second data generation unit 22. However, the configuration of the processing execution portion in the image processing device 10 can be appropriately selected. For example, in the image processing device 10 of the present embodiment, the second data generation unit 22 may generate the monocell luminance distribution data without the first panel luminance distribution calculation unit 14 being included. Conversely, the image processing device 10 of the first embodiment may include a processing unit for calculating the monocell luminance distribution data separately from the second data generation unit 12.
As illustrated in
As shown in
In step S71, the backlight data generation unit (backlight luminance distribution calculation unit) 13 calculates the backlight data, which is control data for each light-emitting region LER of the backlight BL, based on the input image data read in step S1. In step S72, the first panel luminance distribution calculation unit (monocell luminance distribution calculation unit) 14 calculates the monocell luminance distribution data. In step S73, the backlight drive unit 40 drives the backlight BL based on the backlight data calculated in step S71.
As in the image processing device 10 of the present embodiment, even when the backlight BL performs local dimming, the same effects as those obtained by the image processing devices 10 of the other embodiments can be obtained. That is, the aperture ratio of each cell CE continuously changes both when the high luminance region BA in the input image data IM moves within a certain cell CE and when the high luminance region BA moves across the plurality of cells CE. Thus, flicker can be suppressed.
The image processing device according to a fourth embodiment will be described with reference to
As illustrated in
The image processing device according to a fifth embodiment will be described with reference to
As illustrated in
As illustrated in
The luminance of one common region CR where two adjacent cells CE overlap each other is changed by a combination of the controls of the transmittances of the two adjacent cells CE. Details of this configuration and control are disclosed in US2021/0304686.
Even with the display device 1 in which the plurality of cells CE having such a relationship are employed, the same effect as that obtained by the display device 1 of the above-described first to fourth embodiments can be obtained.
The image processing device and the control method of the image processing device of each of the above-described embodiments may be combined as long as they do not contradict each other. For example, the processing using the plurality of monocell data calculation line memories described as the second example of the image processing device 10 of the first embodiment may be combined with the image processing device described as the third or fourth embodiment. The image processing device 10 and the image processing method of other embodiments may be combined with each other as long as they do not contradict each other.
While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
2023-078370 | May 2023 | JP | national |