This application is a U.S. National Phase of International Patent Application No. PCT/JP2017/007464 filed on Feb. 27, 2017, which claims priority benefit of Japanese Patent Application No. JP 2016-065533 filed in the Japan Patent Office on Mar. 29, 2016. Each of the above-referenced applications is hereby incorporated herein by reference in its entirety.
The present disclosure relates to a liquid crystal display apparatus, a liquid crystal display control method, and a program. More specifically, the present disclosure relates to a liquid crystal display apparatus, a liquid crystal display control method, and a program that realize high-quality display with reduced flicker.
At present, liquid crystal display apparatuses are used in various display devices such as televisions, PCs, and smartphones.
Many liquid crystal display apparatuses are driven by an AC voltage to avoid degradation of the liquid crystal. AC driving methods for a liquid crystal panel include a dot inversion driving method that switches positive and negative polarities on a pixel basis, a line inversion driving method that switches the polarities on a line basis, a frame inversion driving method that switches the polarities on a frame basis, and the like.
The liquid crystal panel is driven using any one of these methods or a combination of them.
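For illustration only (this sketch is not part of the disclosure), the following Python snippet generates the +/− polarity patterns that the three inversion driving methods would apply to a small panel; the function name and panel size are hypothetical.

```python
import numpy as np

def polarity_map(rows, cols, frame, method):
    """Return a +1/-1 polarity pattern for one frame under the given inversion method."""
    if method == "dot":        # polarity alternates per pixel and flips every frame
        base = np.add.outer(np.arange(rows), np.arange(cols)) % 2
    elif method == "line":     # polarity alternates per line and flips every frame
        base = np.repeat(np.arange(rows) % 2, cols).reshape(rows, cols)
    elif method == "frame":    # the whole frame shares one polarity that flips every frame
        base = np.zeros((rows, cols), dtype=int)
    else:
        raise ValueError(method)
    return np.where((base + frame) % 2 == 0, 1, -1)

for method in ("dot", "line", "frame"):
    print(method, "\n", polarity_map(4, 4, frame=0, method=method))
```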
However, such driving methods suffer from flicker caused by a voltage difference between the positive and negative polarities.
Note that there is Patent Document 1 (Japanese Patent Application Laid-Open No. 2011-164471) and the like, for example, as a conventional technology disclosing the problem of flicker in a liquid crystal display apparatus.
Patent Document 1 discloses a configuration in which a light shielding body is provided on a liquid crystal panel and measures against flicker caused by a special factor are applied.
However, recently, high-definition panels such as 4K displays have become popular and display images have become finer, making the flicker more conspicuous and increasing visual discomfort.
Furthermore, the flicker may be easier or harder to observe depending on individual differences among liquid crystal panels and on the characteristics of the display image, and there is a problem that uniform control is difficult.
Although Patent Document 1 above and other conventional technologies disclose various flicker reduction configurations, they fail to disclose a configuration to execute flicker reduction processing according to characteristics of the liquid crystal panel or characteristics of the display image.
Patent Document 1: Japanese Patent Application Laid-Open No. 2011-164471
The present disclosure has been made in view of the above-described problems, and an object of the present disclosure is to provide a liquid crystal display apparatus, a liquid crystal display control method, and a program that perform control in consideration of characteristics of a liquid crystal panel and characteristics of a display device, and realize effective flicker reduction, for example.
A first aspect of the present disclosure is
a liquid crystal display apparatus including:
a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device;
a characteristic amount extraction unit configured to extract a characteristic amount of an image to be corrected;
a correction parameter calculation unit configured to calculate a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate; and
an image correction unit configured to execute, for the image to be corrected, correction processing to which the correction parameter has been applied.
Moreover, a second aspect of the present disclosure is
a liquid crystal display apparatus including:
an offline processing unit configured to calculate a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device;
a storage unit configured to store the characteristic amount change rate calculated by the offline processing unit; and
an online processing unit configured to apply the characteristic amount change rate stored in the storage unit and execute correction processing of an image to be corrected, in which
the online processing unit includes
a characteristic amount extraction unit configured to extract a characteristic amount of the image to be corrected,
a correction parameter calculation unit configured to calculate a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate, and
an image correction unit configured to execute, for the image to be corrected, correction processing to which the correction parameter has been applied.
Moreover, a third aspect of the present disclosure is
a liquid crystal display control method executed in a liquid crystal display apparatus,
the liquid crystal display apparatus including a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device,
the liquid crystal display control method including:
by a characteristic amount extraction unit, extracting a characteristic amount of an image to be corrected;
by a correction parameter calculation unit, calculating a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate; and
by an image correction unit, executing, for the image to be corrected, correction processing to which the correction parameter has been applied and outputting the image to be corrected on a display unit.
Moreover, a fourth aspect of the present disclosure is
a liquid crystal display control method executed in a liquid crystal display apparatus, the liquid crystal display control method including:
by an offline processing unit,
executing an offline processing step of calculating a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device, and storing the characteristic amount change rate in a storage unit; and
by an online processing unit,
extracting a characteristic amount of an image to be corrected,
calculating a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate stored in the storage unit, and
executing, for the image to be corrected, correction processing to which the correction parameter has been applied, and displaying the image to be corrected on a display unit.
Moreover, a fifth aspect of the present disclosure is
a program for executing liquid crystal display control processing in a liquid crystal display apparatus,
the liquid crystal display apparatus including a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device,
the program generating a corrected image for output to a display unit by executing:
characteristic amount extraction processing of an image to be corrected in a characteristic amount extraction unit;
processing of calculating a correction parameter for reducing flicker based on a characteristic amount of the image to be corrected and the characteristic amount change rate in a correction parameter calculation unit; and
correction processing to which the correction parameter has been applied for the image to be corrected in an image correction unit.
Moreover, a sixth aspect of the present disclosure is
a program for executing liquid crystal display control processing in a liquid crystal display apparatus, the program generating a corrected image for output to a display unit by causing:
an offline processing unit to execute offline processing of calculating a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device, and storing the characteristic amount change rate in a storage unit; and
an online processing unit to execute
characteristic amount extraction processing of an image to be corrected,
processing of calculating a correction parameter for reducing flicker based on the characteristic amount of the image to be corrected and the characteristic amount change rate stored in the storage unit, and
correction processing to which the correction parameter has been applied, for the image to be corrected.
Note that the program of the present disclosure is, for example, a program that can be provided by a storage medium or a communication medium provided in a computer readable format to an information processing apparatus or a computer system that can execute various program codes. By providing such a program in the computer readable format, processing according to the program is realized on the information processing apparatus or the computer system.
Still other objects, characteristics, and advantages of the present disclosure will be apparent from detailed description based on embodiments and attached drawings of the present disclosure to be described below. Note that the system in the present specification is a logical aggregate configuration of a plurality of devices, and is not limited to devices having respective configurations within the same housing.
According to a configuration of one embodiment of the present disclosure, effective image correction processing for reducing flicker according to characteristics of images is executed, and the flicker of an image to be displayed on the liquid crystal display apparatus can be effectively reduced.
Specifically, characteristic amount change rate data, which is the change rate between the characteristic amount of a sample image and the characteristic amount of the sample image output to the liquid crystal display device, is acquired in advance and stored in the storage unit. The correction parameter for reducing flicker is calculated on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate data of the sample images stored in the storage unit. The correction processing to which the calculated correction parameter has been applied is executed for the image to be corrected to generate a display image. As the characteristic amount, for example, the interframe luminance change amount, the interline luminance change amount, or the interframe motion vector is used.
With the configuration, the effective image correction processing for reducing flicker according to the characteristics of images is executed, and the flicker of the image to be displayed on the liquid crystal display apparatus can be effectively reduced.
Note that effects described in the present specification are merely illustrative and are not restrictive, and there may be additional effects.
Hereinafter, details of a liquid crystal display apparatus, a liquid crystal display control method, and a program of the present disclosure will be described with reference to the drawings. Note that the description will be given according to the following items.
1. Outline of image display processing in liquid crystal display apparatus
2. Configuration for realizing flicker reduction processing corresponding to image characteristics and display unit characteristics
3. Configuration example and processing example of offline processing unit
4. Configuration example and processing example of online processing unit
5. Sequence of processing executed by liquid crystal display apparatus
5-1. Sequence of processing executed by offline processing unit
5-2. Sequence of Processing Example 1 executed by online processing unit
5-3. Sequence of Processing Example 2 executed by online processing unit
6. Hardware configuration example of liquid crystal display apparatus
7. Summary of configuration of present disclosure
First, an outline of image display processing in a liquid crystal display apparatus will be described.
There is a plurality of driving methods for a liquid crystal panel. For example, there are a common DC method, a common inversion method, and the like.
The horizontal axes of both the graphs represent time (t).
The vertical axis represents a gate voltage in the graph of (a) the clock signal and a source voltage in the graph of (b) the cell voltage.
The source voltage varies according to the clock signal.
The curve in graph (b), the cell voltage, illustrates the change of the cell voltage of a certain pixel over three consecutive image frames 1 to 3 displayed on the liquid crystal panel.
A difference from a common voltage illustrated by the dotted line in approximately the center of the vertical axis is output as luminance (brightness) of the pixel.
In the graph of (b), the voltage is larger than the common voltage in the frame 1 and the voltage is smaller than the common voltage in the frame 2.
Since the difference from the common voltage corresponds to the brightness of the pixel, if a difference P of the frame 1 and a difference Q of the frame 2 are equal, the luminance of the pixel in each frame is constant and flicker does not occur.
However, actual source voltage change exhibits a curve as illustrated in
The difference P of the frame 1 is smaller than the difference Q of the frame 2 and a frame luminance difference of Q−P=ΔV occurs.
This frame luminance difference ΔV causes a difference in brightness in the pixel at the same position of the frame 1 and the frame 2.
Similar brightness fluctuation is repeated in the frames 1, 2, 3, 4, and the like. As a result, flicker occurs.
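The mechanism can be illustrated with a few made-up numbers; the values below are hypothetical and only show how an unequal P and Q translate into a per-frame luminance alternation.

```python
# A minimal numeric illustration (values are hypothetical): when the positive-polarity
# difference P and the negative-polarity difference Q from the common voltage are unequal,
# the same pixel alternates in brightness from frame to frame, which is seen as flicker.
P, Q = 4.5, 5.0
delta_v = Q - P                                            # frame luminance difference ΔV
luminance = [P if n % 2 == 0 else Q for n in range(6)]     # frames 1, 2, 3, ...
print("ΔV =", delta_v)                                     # 0.5
print("per-frame luminance:", luminance)                   # [4.5, 5.0, 4.5, 5.0, 4.5, 5.0]
```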
As a technique for reducing the flicker of the liquid crystal panel caused by frame-basis driving, there is a technique of switching the applied voltage on a line basis or a dot (pixel) basis within one image frame.
These driving methods will be described with reference to
An applied voltage (+) or (−) of each pixel is illustrated from an image frame f1 to image frames f2, f3, f4, and the like.
In the example illustrated in
In the example illustrated in
The flicker is unlikely to be perceived with the applied voltage switching processing as illustrated in
However, although the method illustrated in
This phenomenon will be described with reference to
In these image frames, an object A moving in a right direction is displayed. A line pq illustrated in the frames 1 and 2 is one boundary line of the object A.
The boundary line pq in the frame 1 is displayed at a position shifted to the right by one pixel in the next frame 2.
When such movement of the object occurs, the boundary line pq of the object A is always positioned along the line where the applied voltage is (+) in consecutive image frames.
As a result, the boundary line pq of the object A is continuously displayed as pixels having a fixed luminance difference from adjacent pixels, that is, pixels of the applied voltage (−), and is observed as if a line with luminance different from the surroundings flows on the screen.
As described above, even when the measures against flicker described with reference to
Next, a configuration for realizing flicker reduction processing corresponding to image characteristics and display unit characteristics will be described.
A liquid crystal display apparatus 10 of the present disclosure includes an offline processing unit 100, a display device 110, a database 150, and an online processing unit 200.
The display device 110 includes a panel drive unit 111 and a liquid crystal panel 112.
Note that the liquid crystal display apparatus 10 illustrated in
The offline processing unit 100 sequentially inputs sample images 20 having various different characteristics. Further, the offline processing unit 100 inputs output image data and the like of the sample image displayed on the display device 110.
The offline processing unit 100 analyzes characteristics of the sample image 20 and the output image displayed on the display device 110, generates data to be applied to image correction processing in the online processing unit 200 on the basis of an analysis result, and accumulates the data in the storage unit (database) 150.
The image correction processing executed in the online processing unit 200 is correction processing executed for the purpose of reducing flicker, and the offline processing unit 100 compares a characteristic amount of the sample image having various characteristics with a characteristic amount of the output image output to the display device 110, generates data to be applied to correction processing for executing optimal flicker reduction for various images, and accumulates the data in the storage unit (database) 150.
The online processing unit 200 inputs the data of an image to be corrected 50, executes the image correction processing using the data stored in the storage unit (database) 150, and outputs a corrected image to the display device 110 to display the corrected image.
Note that the image correction processing in the online processing unit 200 is correction processing executed for the purpose of reducing flicker.
The data accumulation processing for the storage unit (database) 150 in the offline processing unit 100 is executed prior to the image correction processing in the online processing unit 200.
After the data is stored in the storage unit (database) 150, the offline processing unit 100 can be disconnected, and the online processing unit 200 can execute correction for reducing flicker using the data stored in the storage unit 150 and display an image on the display device 110.
Accordingly, a configuration in which the offline processing unit 100 is omitted may be used as a configuration example of the liquid crystal display apparatus of the present disclosure.
Hereinafter, specific configuration examples and processing examples of the offline processing unit 100 and the online processing unit 200 will be described in order.
Next, a configuration and a processing example of the offline processing unit 100 of the liquid crystal display apparatus 10 illustrated in
As described with reference to
As illustrated in
The offline processing unit 100 inputs the sample image 20 having various different characteristics, generates the data to be applied to the image correction processing in the online processing unit 200, and accumulates the data in the storage unit (database) 150.
Note that
The display device 110 is the display device 110 illustrated in
As described above, the display device 110 is an independent element and is also used as a constituent element of the offline processing unit 100 and of the online processing unit 200.
Processing executed by the offline processing unit 100 illustrated in
The image characteristic amount calculation unit 101 inputs the sample image 20 having various different characteristics, analyzes the input sample image 20, and calculates various characteristic amounts from each sample image.
An example of the characteristic amounts acquired from the sample image 20 by the image characteristic amount calculation unit 101 will be described with reference to
As illustrated in
(1) An interframe luminance change amount: ΔYframe(in)(n)
(2) An interline luminance change amount: ΔYline(in)(n)
(3) An interframe motion vector: MVframe(in)(n)
Note that the input sample images 20 include various different images such as moving images and still images. In the case of a moving image, a moving object is included in consecutive image frames.
“(1) The interframe luminance change amount: ΔYframe(in) (n)” is a difference in image frame average luminance between two consecutive image frames.
n in ΔYframe(in) (n) means a frame number, ΔY means a difference in luminance (Y), and (in) means an input image. ΔYframe(in) (n) means a difference in frame average luminance between two consecutive input frames of a frame n and a frame n+1.
“(2) The interline luminance change amount: ΔYline(in) (n)” is a difference in pixel line average luminance between adjacent pixel lines in one image frame.
n in ΔYline(in) (n) means a frame number, ΔY means a difference in luminance (Y), and (in) means an input image. ΔYline(in) (n) means a difference in pixel line average luminance of an input frame n.
Note that the interline luminance change amount is calculated for each of a horizontal line and a vertical line.
“(3) The interframe motion vector: MVframe(in) (n)” is a motion vector indicating a motion amount between frames calculated from two consecutive image frames.
n in MVframe(in) (n) means a frame number, MV means a motion vector, and (in) means an input image. MVframe(in) (n) means a motion vector indicating a motion amount of two consecutive input frames of a frame n and a frame n+1.
The image characteristic amount calculation unit 101 calculates these three types of image characteristic amounts, and inputs the calculated image characteristic amounts to the input/output image characteristic amount change rate calculation unit 103, for example.
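As a rough illustration of how such characteristic amounts might be computed, the sketch below uses NumPy; the function names, the exhaustive-shift search used for the motion vector, and the averaging choices are assumptions for illustration, not the implementation of the image characteristic amount calculation unit 101.

```python
import numpy as np

def interframe_luminance_change(frame_n, frame_n1):
    """ΔYframe: difference in frame-average luminance between two consecutive frames."""
    return float(frame_n1.mean() - frame_n.mean())

def interline_luminance_change(frame, axis=0):
    """ΔYline: mean luminance difference between adjacent pixel lines
    (axis=0: horizontal lines, axis=1: vertical lines)."""
    line_means = frame.mean(axis=1 - axis)
    return float(np.abs(np.diff(line_means)).mean())

def interframe_motion_vector(frame_n, frame_n1, max_shift=4):
    """MVframe: a coarse global motion vector found by an exhaustive shift search."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame_n, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - frame_n1).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Hypothetical luminance frames: frame n+1 is frame n shifted right by one pixel and brighter by 2.
rng = np.random.default_rng(0)
f_n = rng.integers(0, 256, (64, 64)).astype(float)
f_n1 = np.roll(f_n, 1, axis=1) + 2.0
print(interframe_luminance_change(f_n, f_n1))   # 2.0
print(interline_luminance_change(f_n))          # average line-to-line luminance difference
print(interframe_motion_vector(f_n, f_n1))      # (0, 1)
```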
Next, processing executed by the image temporal change amount calculation unit 102 will be described.
The image temporal change amount calculation unit 102, for example, calculates a temporal change amount of each characteristic amount, using the image characteristic amounts of two consecutive frames input as the sample image 20, that is, the image frame n and the image frame n+1.
An example of the temporal change amount of input image characteristic amounts acquired from the two consecutive frames (frames n and n+1) input as the sample image 20 by the image temporal change amount calculation unit 102 will be described with reference to
As illustrated in
The image temporal change amount calculation unit 102 acquires the temporal change amounts of the following image characteristic amounts acquired from the two consecutive frames (frames n and n+1) input as the sample image 20.
α1in(n), α2in(n), and α3in(n) are expressed by the following expressions (Expressions 1a to 1c).
In this manner, the image temporal change amount calculation unit 102 acquires the temporal change amounts of the three types of image characteristic amounts acquired from the two consecutive frames (frames n and n+1) input as the sample image 20.
The image temporal change amount calculation unit 102 calculates the temporal change amounts of the three types of image characteristic amounts, and inputs the calculated temporal change amounts of the image characteristic amounts to the input/output image characteristic amount change rate calculation unit 103, for example.
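The expressions themselves are not reproduced in this section; the sketch below therefore assumes, purely for illustration, that each temporal change amount is the difference of the corresponding characteristic amount between consecutive frames.

```python
# Assumption (not Expressions 1a to 1c themselves): the temporal change amount of a
# characteristic amount is taken here as its difference between consecutive frames.
def temporal_change(values):
    """values[n] is a characteristic amount of frame n; returns one change amount per frame pair."""
    return [values[n + 1] - values[n] for n in range(len(values) - 1)]

delta_y_frame_in = [10, 14, 11]           # hypothetical ΔYframe(in)(n) for frames 0, 1, 2
print(temporal_change(delta_y_frame_in))  # [4, -3]
```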
Next, processing executed by the input/output image characteristic amount change rate calculation unit 103 and the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 will be described.
The drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 acquires a temporal change amount of a drive voltage of the sample image 20 displayed on the display device 110. The drive voltage corresponds to the cell voltage described with reference to
In other words, the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 calculates the temporal change amounts (α1out(n), α2out(n), and α3out(n)) of the characteristic amounts of the image (output image) displayed on the liquid crystal panel 112.
The temporal change amounts (α1out(n), α2out(n), and α3out(n)) of the characteristic amounts of the image (output image) displayed on the liquid crystal panel 112 are the following temporal change amounts of the characteristic amounts of the output image.
(1) The temporal change amount of the interframe luminance change amount: α1out(n)
(2) The temporal change amount of the interline luminance change amount: α2out(n)
(3) The temporal change amount of the interframe motion vector: α3out(n)
The input/output image characteristic amount change rate calculation unit 103 calculates the characteristic amount change rates (α1(n), α2(n), and α3(n)) of the input/output images by inputting the characteristic amount temporal change amounts (α1in(n), α2in(n), and α3in(n)) corresponding to the input image (input sample image) from the image temporal change amount calculation unit 102 and the characteristic amount temporal change amounts (α1out(n), α2out(n), and α3out(n)) corresponding to the output image (output sample image) from the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104, in other words, the temporal change amounts of the image characteristic amounts of each of the input/output images.
“
(1) An interframe luminance change amount: ΔYframe(in)(n)
(2) An interline luminance change amount: ΔYline(in)(n)
(3) An interframe motion vector: MVframe(in)(n)
“
“(c) The temporal change amount of the output image characteristic amount” is calculated by the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 illustrated in
As illustrated in
“(c) The temporal change amounts of the output image characteristic amounts” (α1out(n), α2out(n), and α3out(n))” calculated by the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 are expressed by the following expressions (Expressions 2a to 2c).
In this manner, the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 acquires the temporal change amounts of the characteristic amounts of the output image in the display device 110 of the input sample image 20, in other words, the temporal change amounts of the three types of image characteristic amounts acquired from the output image of the two consecutive frames (frames n and n+1).
The input/output image characteristic amount change rate calculation unit 103 inputs the respective data illustrated in
Specifically, the input/output image characteristic amount change rate calculation unit 103 calculates the characteristic amount change rates (α1(n), α2(n), and α3(n)) of the input/output images illustrated in
the temporal change amounts of the input image characteristic amounts in
in other words, the characteristic amount temporal change amounts (α1in(n), α2in(n), and α3in(n)) corresponding to the input image (input sample image) input from the image temporal change amount calculation unit 102; and
the characteristic amount temporal change amounts corresponding to the output image (output sample image) in
in other words, the characteristic amount temporal change amounts (α1out(n), α2out(n), and α3out(n)) corresponding to the output image (output sample image) input from the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104.
The characteristic amount change rates (α1 (n), α2 (n), and α3 (n)) of the input/output images are expressed by the following expressions (Expressions 3a to 3c).
In this manner, the input/output image characteristic amount change rate calculation unit 103 inputs the temporal change amounts of the image characteristic amounts of the input/output images related to the sample image 20, and calculates the characteristic amount change rates (α1(n), α2(n), and α3(n)) of the input/output images illustrated in
The calculated characteristic amount change rates (α1(n), α2(n), and α3(n)) of the input/output images are stored in the storage unit (database) 150 as correspondence data to the data of the input image characteristic amounts.
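Expressions 3a to 3c are likewise not reproduced here. Consistent with the change rate being defined between the input-side and output-side quantities, the sketch below assumes a simple ratio; the actual expressions in the disclosure may differ.

```python
# Assumption (not Expressions 3a to 3c themselves): the change rate relates the
# output-side temporal change amount to the input-side one, taken here as a ratio.
def change_rate(alpha_in, alpha_out, eps=1e-6):
    """Characteristic amount change rate of the input/output images for one frame pair."""
    return alpha_out / alpha_in if abs(alpha_in) > eps else 0.0

alpha1_in, alpha1_out = 4.0, 3.2              # hypothetical temporal change amounts for frame n
print(change_rate(alpha1_in, alpha1_out))     # 0.8: the panel reproduces 80% of the input change
```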
“Correspondence data 120 between the input image characteristic amount and the characteristic amount change rate of the input/output images” which is “correspondence data 120 between the input image characteristic amount and the characteristic amount change rate of the input/output images” illustrated in
in the following data described with reference to
“
(1) An interframe luminance change amount: ΔYframe(in)(n)
(2) An interline luminance change amount: ΔYline(in)(n)
(3) An interframe motion vector: MVframe(in)(n)
“
The input/output image characteristic amount change rate calculation unit 103 generates correspondence data of the two data:
(a) the image characteristic amount; and
(d) the characteristic amount change rate of the input/output images
on a characteristic amount basis, and stores the correspondence data in the storage unit (database) 150.
Specifically, as illustrated in the lower graph in
(1) input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) input/output image characteristic amount change rate data corresponding to the interframe motion vector, and stores the data in the storage unit (database) 150.
The “(1) input/output image characteristic amount change rate data corresponding to the interframe luminance change amount” is correspondence data indicating a correspondence relationship between
(1A) the interframe luminance change amount: ΔYframe(in)(n); and
(1B) the characteristic amount (interframe luminance change amount) change rate of the input/output images: α1(n), as illustrated in
The “(2) input/output image characteristic amount change rate data corresponding to the interline luminance change amount” is correspondence data indicating a correspondence relationship between
(2A) the interline luminance change amount: ΔYline(in)(n); and
(2B) the characteristic amount (interline luminance change amount) change rate of the input/output images: α2(n), as illustrated in
The “(3) input/output image characteristic amount change rate data corresponding to the interframe motion vector” is correspondence data indicating a correspondence relationship between
(3A) the interframe motion vector: MVframe(in)(n); and
(3B) the characteristic amount (interframe motion vector) change rate of the input/output images: α3(n), as illustrated in
The input/output image characteristic amount change rate calculation unit 103 thus generates correspondence data of the two data:
(a) the image characteristic amount; and
(d) the characteristic amount change rate of the input/output images,
for each of the three characteristic amounts, and stores the correspondence data in the storage unit (database) 150.
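One possible way to organize the stored correspondence data is to bin the input characteristic amount and keep the mean change rate per bin, as sketched below; the dictionary layout and bin width are assumptions rather than the storage format of the storage unit (database) 150.

```python
import collections

def build_lookup(samples, bin_width):
    """samples: (input characteristic amount, input/output change rate) pairs collected
    from many sample images; returns {bin index: mean change rate}, i.e. one stored curve."""
    acc = collections.defaultdict(lambda: [0.0, 0])
    for amount, rate in samples:
        b = int(amount // bin_width)
        acc[b][0] += rate
        acc[b][1] += 1
    return {b: total / count for b, (total, count) in acc.items()}

# One curve per characteristic amount would be stored; the values below are hypothetical.
curve = build_lookup([(2.0, 0.9), (2.4, 0.85), (7.5, 0.6), (8.0, 0.55)], bin_width=5.0)
print(curve)   # approximately {0: 0.875, 1: 0.575}
```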
The data stored in the storage unit (database) 150 is data to be applied to the image correction processing in the online processing unit 200.
The offline processing unit 100 inputs the sample image 20 having various different characteristics, further inputs the output image data of the sample image displayed on the display device 110, analyzes characteristics of the input/output images, generates the data to be applied to the image correction processing in the online processing unit 200 on the basis of an analysis result, and accumulates the data in the storage unit (database) 150.
In other words, the offline processing unit 100 inputs various images having different image characteristic amounts:
(1) the interframe luminance change amount;
(2) the interline luminance change amount; and
(3) the interframe motion vector,
as the sample images, and generates the correspondence data between the image characteristic amount and the characteristic amount change rate of the input/output images,
that is, the correspondence data illustrated as the three graphs in
Next, a configuration and a processing example of the online processing unit 200 of the liquid crystal display apparatus 10 illustrated in
The online processing unit 200 illustrated in
Note that the image correction processing in the online processing unit 200 is correction processing executed for the purpose of reducing flicker.
As illustrated in
Note that
The display device 110 is the display device 110 illustrated in
As described above, the display device 110 is an independent element and is also a constituent element of the offline processing unit 100 and of the online processing unit 200.
Processing executed by the online processing unit 200 illustrated in
The image characteristic amount calculation unit 201 inputs the image to be corrected 50, analyzes the input image to be corrected 50, and calculates various characteristic amounts from each image to be corrected.
The characteristic amount acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201 is the same type of characteristic amount as the characteristic amount acquired by the image characteristic amount calculation unit 101 of the offline processing unit 100 described with reference to
In other words, the image characteristic amount calculation unit 201 acquires the following image characteristic amounts from the image to be corrected 50.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
“(1) The interframe luminance change amount: ΔYframe(n)” is a difference in image frame average luminance between two consecutive image frames.
“(2) The interline luminance change amount: ΔYline(n)” is a difference in pixel line average luminance between adjacent pixel lines in one image frame.
Note that the interline luminance change amount is calculated for each of a horizontal line and a vertical line.
“(3) The interframe motion vector: MVframe(in) (n)” is a motion vector indicating a motion amount between frames calculated from two consecutive image frames.
The image characteristic amount calculation unit 201 calculates these three types of image characteristic amounts, in other words, image characteristic amounts 210 illustrated in
The correction parameter calculation unit 202 inputs
the image characteristic amounts 210, in other words, the following image characteristic amounts of the image to be corrected 50 from the image characteristic amount calculation unit 201.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline (n)
(3) An interframe motion vector: MVframe (n)
Moreover, the correction parameter calculation unit 202 inputs the following data described with reference to
(1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector, from the storage unit (database) 150.
The correction parameter calculation unit 202 calculates a correction parameter 250 for reducing flicker of the image to be corrected 50, using the input data, and outputs the calculated correction parameter 250 to the image correction unit 203.
A specific example of correction parameter calculation processing executed by the correction parameter calculation unit 202 will be described with reference to
(A) The data stored in the storage unit (database) 150 is the following data:
(A1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(A2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(A3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector.
(B) The characteristic amounts acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201 are the following image characteristic amounts.
(B1) An interframe luminance change amount: ΔYframe (n)
(B2) An interline luminance change amount: ΔYline(n)
(B3) An interframe motion vector: MVframe(n)
The correction parameter calculation unit 202 calculates one parameter in the correction parameters illustrated in
(C1) a temporal direction smoothing coefficient (Ft)
on the basis of the two data:
“(A1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount” stored in the storage unit (database) 150; and
“(B1) the interframe luminance change amount: ΔYframe(n)211” acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201.
Note that
This graph is generated on the basis of the correspondence relationship data stored in the storage unit (database) 150, that is, "(A1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount", with the interframe luminance change amount of the sample image ΔYframe(in)(n) on the horizontal axis and the characteristic amount (interframe luminance change amount) change rate of the input/output images α1 on the vertical axis.
(C1) The temporal direction smoothing coefficient (Ft) is generated by replacing
the interframe luminance change amount: ΔYframe(in) (n) of the sample image on the horizontal axis of
the storage data in the storage unit (database) 150, in other words,
(A1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount
with the image characteristic amount acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201,
(B1) the interframe luminance change amount: ΔYframe (n), and
by further replacing α1 on the vertical axis with the temporal direction smoothing coefficient (Ft).
Note that the temporal direction smoothing coefficient (Ft) on the vertical axis may be set to
Ft=α1.
However, the temporal direction smoothing coefficient (Ft) calculated according to the following calculation expression
Ft=k·α1,
using a predefined multiplication parameter k, may be set on the vertical axis.
The correction parameter calculation unit 202 calculates one temporal direction smoothing coefficient (Ft), using the correspondence relationship data (graph) illustrated in
This processing will be described with reference to
Assuming that the following image characteristic amount acquired from the frame n of the image to be corrected 50 by the image characteristic amount calculation unit 201:
(B1) the interframe luminance change amount: ΔYframe(n)
is ΔYframe(n)271 on the horizontal axis of the graph (C1) in
The correction parameter calculation unit 202 obtains the temporal direction smoothing coefficient (Ft) corresponding to ΔYframe(n)271 according to the curve of the graph (C1) in
In the example in
The correction parameter calculation unit 202 outputs the temporal direction smoothing coefficient (Ft(n)) to the image correction unit 203 as the temporal direction smoothing coefficient (Ft) to be applied to the frame n.
The temporal direction smoothing coefficient (Ft(n)) is one correction parameter corresponding to the frame included in the correction parameter 250(n) illustrated in
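The lookup can be sketched as an interpolation of the stored (ΔYframe(in), α1) curve at the characteristic amount of the frame to be corrected, scaled by the multiplication parameter k mentioned above; the sample points of the curve below are hypothetical.

```python
import numpy as np

def temporal_smoothing_coefficient(delta_y_frame_n, curve_x, curve_alpha1, k=1.0):
    """Ft(n) = k * α1, with α1 looked up at ΔYframe(n) on the stored correspondence curve."""
    return k * float(np.interp(delta_y_frame_n, curve_x, curve_alpha1))

# Hypothetical stored curve: ΔYframe(in) values (x) and the corresponding change rates α1 (y).
curve_x = np.array([0.0, 2.0, 5.0, 10.0])
curve_alpha1 = np.array([1.0, 0.9, 0.7, 0.5])
print(temporal_smoothing_coefficient(3.5, curve_x, curve_alpha1))   # approximately 0.8
```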
Referring back to
Moreover, the correction parameter calculation unit 202 calculates one parameter in the correction parameters illustrated in
(C2) a spatial direction smoothing coefficient (Fs)
on the basis of the two data:
“(A2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount” stored in the storage unit (database) 150 illustrated in
“(B2) the interline luminance change amount: ΔYline(n)212” acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201.
Note that
This graph is generated on the basis of the correspondence relationship data stored in the storage unit (database) 150, that is, "(A2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount", with the interline luminance change amount of the sample image ΔYline(in)(n) on the horizontal axis and the characteristic amount (interline luminance change amount) change rate of the input/output images α2 on the vertical axis.
(C2) The spatial direction smoothing coefficient (Fs) is generated by replacing
the interline luminance change amount: ΔYline(in)(n) of the sample image on the horizontal axis of
the storage data in the storage unit (database) 150, in other words,
(A2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount
with the image characteristic amount acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201,
(B2) the interline luminance change amount: ΔYline (n), and
by further replacing α2 on the vertical axis with the spatial direction smoothing coefficient (Fs).
Note that the spatial direction smoothing coefficient (Fs) on the vertical axis may be set to
Fs=α2.
However, the spatial direction smoothing coefficient (Fs) calculated according to the following calculation expression
Fs=k·α2,
using a predefined multiplication parameter k, may be set on the vertical axis.
The correction parameter calculation unit 202 calculates the one spatial direction smoothing coefficient (Fs), using the correspondence relationship data (graph) illustrated in
This processing will be described with reference to
Assuming that the following image characteristic amount acquired from the frame n of the image to be corrected 50 by the image characteristic amount calculation unit 201:
(B2) the interline luminance change amount: ΔYline(n)
is ΔYline(n)272 on the horizontal axis of the graph (C2) in
In the example in
The correction parameter calculation unit 202 outputs the spatial direction smoothing coefficient (Fs(n)) to the image correction unit 203 as the spatial direction smoothing coefficient (Fs) to be applied to the frame n.
The spatial direction smoothing coefficient (Fs(n)) is one correction parameter corresponding to the frame included in the correction parameter 250(n) illustrated in
Referring back to
Moreover, the correction parameter calculation unit 202 calculates one parameter in the correction parameters illustrated in
(C3) a smoothing processing gain value (G) on the basis of the two data: “(A3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector” stored in the storage unit (database) 150; and
“(B3) the interframe motion vector: MVframe(n)213” acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201.
Note that
This graph is generated on the basis of the correspondence relationship data stored in the storage unit (database) 150, that is, "(A3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector", with the interframe motion vector of the sample image MVframe(in)(n) on the horizontal axis and the characteristic amount (interframe motion vector) change rate of the input/output images α3 on the vertical axis.
(C3) The smoothing processing gain value (G) is generated by replacing
the interframe motion vector: MVframe(in) (n) of the sample image on the horizontal axis of
the storage data in the storage unit (database) 150, in other words,
(A3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector
with the image characteristic amount acquired from the image to be corrected 50 by the image characteristic amount calculation unit 201,
(B3) the interframe motion vector: MVframe (n), and
by further replacing α3 on the vertical axis with the smoothing processing gain value (G).
Note that the smoothing processing gain value (G) on the vertical axis may be set to
G=α3.
However, the smoothing processing gain value (G) calculated according to the following calculation expression
G=k·α3,
using a predefined multiplication parameter k, may be set on the vertical axis.
The correction parameter calculation unit 202 calculates the one smoothing processing gain value (G), using the correspondence relationship data (graph) illustrated in
This processing will be described with reference to
Assuming that the following image characteristic amount acquired from the frame n of the image to be corrected 50 by the image characteristic amount calculation unit 201:
(B3) the interframe motion vector: MVframe(n)
is MVframe(n)273 on the horizontal axis of the graph (C3) in
The correction parameter calculation unit 202 obtains the smoothing processing gain value (G) corresponding to MVframe(n)273 according to the curve of the graph (C3) in
In the example in
The correction parameter calculation unit 202 outputs the smoothing processing gain value (G(n)) to the image correction unit 203 as the smoothing processing gain value (G) to be applied to the frame n.
The smoothing processing gain value (G(n)) is one correction parameter corresponding to the frame included in the correction parameter 250(n) illustrated in
In this manner, the correction parameter calculation unit 202
inputs the data:
(1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector,
from the storage unit (database) 150, and
inputs the following image characteristic amounts of the image to be corrected 50 from the image characteristic amount calculation unit 201.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
The correction parameter calculation unit 202 calculates the following image correction parameters illustrated in
(C1) the temporal direction smoothing coefficient (Ft);
(C2) the spatial direction smoothing coefficient (Fs); and
(C3) the smoothing processing gain value (G)
on the basis of the input data.
The above three types of image correction parameters 250 calculated by the correction parameter calculation unit 202 are input to the image correction unit 203 of the online processing unit 200 illustrated in
The image correction unit 203 executes the image correction processing for the image to be corrected 50, applying the following correction parameters 250 input from the correction parameter calculation unit 202.
(C1) The temporal direction smoothing coefficient (Ft)
(C2) The spatial direction smoothing coefficient (Fs)
(C3) The smoothing processing gain value (G)
The corrected image generated by applying the above correction parameters is output to the display device 110 and displayed.
The correction parameters (C1) to (C3) are correction parameters that produce a flicker reduction effect and that reflect the characteristics of the input image and the output characteristics of the display device.
Therefore, optimum flicker reduction processing according to characteristics of an image and characteristics of a display device becomes possible by the image correction to which the correction parameters are applied.
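The exact correction formula is not spelled out in this section, so the following is only a plausible sketch of how the three parameters could drive the correction: Ft blends the current frame with the previous one, Fs averages each pixel with its neighboring lines, and G controls how strongly the smoothed result replaces the original. All of the weighting choices below are assumptions.

```python
import numpy as np

def correct_frame(curr, prev, ft, fs, gain):
    """Hypothetical flicker-reduction correction driven by Ft, Fs, and G (assumed in [0, 1])."""
    temporal = (1.0 - ft) * curr + ft * prev                      # temporal direction smoothing
    neighbors = 0.5 * (np.roll(temporal, 1, axis=0) + np.roll(temporal, -1, axis=0))
    spatial = (1.0 - fs) * temporal + fs * neighbors              # interline (spatial) smoothing
    return (1.0 - gain) * curr + gain * spatial                   # G scales the overall strength

rng = np.random.default_rng(1)
prev, curr = rng.random((4, 6)), rng.random((4, 6))
print(correct_frame(curr, prev, ft=0.3, fs=0.2, gain=0.5).shape)  # (4, 6)
```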
Next, a sequence of processing executed by the liquid crystal display apparatus will be described.
Sequences of processing executed by the liquid crystal display apparatus will be described with reference to the flowcharts illustrated in
The flowcharts illustrated in
(1) Sequence of processing executed by the offline processing unit
(2) Sequence of Processing Example 1 executed by the online processing unit
(3) Sequence of Processing Example 2 executed by the online processing unit
Hereinafter, each processing sequence will be described according to each flow.
First, the sequence of the processing executed by the offline processing unit 100 will be described with reference to the flowchart illustrated in
First, as described with reference to
Note that the processing according to the flowchart illustrated in
Hereinafter, the processing of each step of the flowchart illustrated in
(Step S101)
First, in step S101, the offline processing unit 100 inputs the sample image.
(Step S102)
Next, in step S102, the offline processing unit 100 extracts the characteristic amounts of the sample image.
This processing is the processing executed by the image characteristic amount calculation unit 101 of the offline processing unit 100 illustrated in
As described with reference to
(1) An interframe luminance change amount: ΔYframe(in)(n)
(2) An interline luminance change amount: ΔYline(in)(n)
(3) An interframe motion vector: MVframe(in)(n)
(Step S103)
Next, in step S103, the offline processing unit 100 calculates the temporal change amounts of the sample image characteristic amounts.
This processing is the processing executed by the image temporal change amount calculation unit 102 of the offline processing unit 100 illustrated in
The image temporal change amount calculation unit 102 acquires the temporal change amounts of the following image characteristic amounts acquired from the two consecutive frames (frames n and n+1) input as the sample image 20.
(1) The temporal change amount of the interframe luminance change amount: α1in(n)
(2) The temporal change amount of the interline luminance change amount: α2in(n)
(3) The temporal change amount of the interframe motion vector: α3in(n)
The temporal change amounts [α1in(n), α2in(n), and α3in(n)] of the image characteristic amounts are the data described with reference to
(Step S104)
Next, in step S104, the offline processing unit 100 calculates the characteristic amount temporal change amounts of the output image to be output to the liquid crystal panel on the basis of the input sample image.
This processing is the processing executed by the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 of the offline processing unit 100 illustrated in
The drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 of the offline processing unit 100 illustrated in
In other words, the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104 calculates the temporal change amounts (α1out(n), α2out(n), and α3out(n)) of the characteristic amounts of the image (output image) displayed on the liquid crystal panel 112.
This data is the data illustrated in
(Step S105)
Next, in step S105, the offline processing unit 100 calculates the characteristic amount change rates of the input/output image of the sample image.
This processing is the processing executed by the input/output image characteristic amount change rate calculation unit 103 of the offline processing unit 100 illustrated in
The input/output image characteristic amount change rate calculation unit 103 calculates
characteristic amount change rates (α1 (n), α2 (n), and α3 (n)) of the input/output images by inputting
the characteristic amount temporal change amounts (α1in(n), α2in(n), and α3in(n)) corresponding to the input image (input sample image) input from the image temporal change amount calculation unit 102,
the characteristic amount temporal change amounts (α1out(n), α2out(n), and α3out(n)) corresponding to the output image (output sample image) input from the drive voltage temporal change amount (light emission level temporal change amount) acquisition unit 104, and
the temporal change amounts of the image characteristic amounts of each of the input/output images.
The characteristic amount change rates (α1(n), α2(n), and α3(n)) of the input/output image calculated by the input/output image characteristic amount change rate calculation unit 103 are the characteristic amount change rate data of the input/output images illustrated in
(Step S106)
Next, in step S106, the offline processing unit 100 stores the correspondence relationship data between the characteristic amounts of the sample image and the characteristic amount change rates of the input/output images in the storage unit (database).
This processing is the processing executed by the input/output image characteristic amount change rate calculation unit 103 of the offline processing unit 100 illustrated in
This processing is the processing described with reference to
Specifically, as illustrated in the lower graph in
(1) input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) input/output image characteristic amount change rate data corresponding to the interframe motion vector, and stores the data in the storage unit (database) 150.
(Step S107)
Next, in step S107, the offline processing unit 100 determines whether the processing for all the sample images has been completed.
In the case where there is an unprocessed sample image, the processing of step S101 and the following steps is executed for the unprocessed image.
In a case where the processing for all the sample images has been completed, the processing is terminated.
The offline processing unit 100 inputs the sample image 20 having various different characteristics, further inputs the output image data of the sample image displayed on the display device 110, analyzes characteristics of the input/output images, generates the data to be applied to the image correction processing in the online processing unit 200 on the basis of an analysis result, and accumulates the data in the storage unit (database) 150, according to the flow illustrated in
Next, the sequence of the processing example 1 executed by the online processing unit 200 will be described with reference to the flowchart illustrated in
As described with reference to
Note that the image correction processing in the online processing unit 200 is correction processing executed for the purpose of reducing flicker.
Note that the processing according to the flowchart illustrated in
Hereinafter, the processing of each step of the flowchart illustrated in
(Step S201)
First, in step S201, the online processing unit 200 inputs the image to be corrected.
(Step S202)
Next, in step S202, the online processing unit 200 extracts the characteristic amounts of the image to be corrected.
This processing is the processing executed by the image characteristic amount calculation unit 201 of the online processing unit 200 illustrated in
The image characteristic amount calculation unit 201 acquires the following image characteristic amounts from the image to be corrected 50.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
“(1) The interframe luminance change amount: ΔYframe(n)” is a difference in image frame average luminance between two consecutive image frames.
“(2) The interline luminance change amount: ΔYline(n)” is a difference in pixel line average luminance between adjacent pixel lines in one image frame.
Note that the interline luminance change amount is calculated for each of a horizontal line and a vertical line.
“(3) The interframe motion vector: MVframe(in) (n)” is a motion vector indicating a motion amount between frames calculated from two consecutive image frames.
The image characteristic amount calculation unit 201 calculates these three types of image characteristic amounts, in other words, image characteristic amounts 210 illustrated in
(Step S203)
Next, in step S203, the online processing unit 200 selects one or more types of processing determined to have a high flicker reduction effect from the following processing, on the basis of the image characteristic amounts extracted in step S202.
(a) Interframe luminance difference reduction processing
(b) Interline luminance difference reduction processing
(c) Luminance difference reduction processing according to a motion vector
For example, the following characteristic amounts extracted from the image to be corrected in step S202, in other words:
(1) the interframe luminance change amount: ΔYframe(n);
(2) the interline luminance change amount: ΔYline (n); and
(3) the interframe motion vector: MVframe (n)
are compared with predefined threshold values Th1 to Th3. When a characteristic amount is equal to or larger than the corresponding threshold value, it is determined that the corresponding processing among (a) to (c) has a flicker reduction effect.
Specifically, for example, the following determination processing is performed.
In the case where
(Determination Expression 1) the interframe luminance change amount: ΔYframe (n)≥Th1
is satisfied,
it is determined that there is the flicker reduction effect by (a) the interframe luminance difference reduction processing.
Furthermore,
in the case where
(Determination Expression 2) the interline luminance change amount: ΔYline (n)≥Th2
is satisfied,
it is determined that there is the flicker reduction effect by (b) the interline luminance difference reduction processing.
Furthermore,
in the case where
(Determination Expression 3) the interframe motion vector: MVframe(n)≥Th3
is satisfied,
it is determined that there is the flicker reduction effect by (c) the luminance difference reduction processing according to a motion vector.
Note that these determination processes can be performed on a pixel basis of the image to be corrected or on a pixel region basis configured by a plurality of pixels.
As described above, in step S203, the online processing unit 200 selects, from the following processing, one or more processes determined to have a high flicker reduction effect on the basis of the image characteristic amounts extracted in step S202.
(a) Interframe luminance difference reduction processing
(b) Interline luminance difference reduction processing
(c) Luminance difference reduction processing according to a motion vector
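The comparison with Determination Expressions 1 to 3 can be sketched in Python as follows; the threshold values Th1 to Th3 used as defaults are placeholders, since the description reproduced here does not give numerical values for them.

def select_reduction_processing(delta_y_frame, delta_y_line, mv_frame_magnitude,
                                th1=0.05, th2=0.02, th3=2.0):
    # A processing (a) to (c) is selected when the corresponding characteristic
    # amount is equal to or larger than its threshold value.
    selected = []
    if delta_y_frame >= th1:
        selected.append('(a) interframe luminance difference reduction')
    if delta_y_line >= th2:
        selected.append('(b) interline luminance difference reduction')
    if mv_frame_magnitude >= th3:
        selected.append('(c) luminance difference reduction according to a motion vector')
    return selected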
(Step S204)
Next, in step S204, the online processing unit 200 calculates the correction parameter to be applied to execute the processing selected from below as the processing having the flicker reduction effect in step S203, in other words:
(a) the interframe luminance difference reduction processing;
(b) the interline luminance difference reduction processing; and
(c) the luminance difference reduction processing according to a motion vector.
This processing is the processing executed by the correction parameter calculation unit 202 of the online processing unit 200 illustrated in
Note that the calculation of the correction parameter is executed on a region basis, the region being targeted for flicker reduction effect existence determination processing in step S203. In other words, the processing is executed on a pixel basis of the image to be corrected or on a pixel region basis configured by a plurality of pixels.
The correction parameter calculation unit 202 inputs
the following image characteristic amounts of the image to be corrected 50 from the image characteristic amount calculation unit 201.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
Moreover, the correction parameter calculation unit 202 inputs the following data described with reference to
(1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector,
from the storage unit (database) 150.
The correction parameter calculation unit 202 calculates a correction parameter 250 for reducing flicker of the image to be corrected 50, using the input data, and outputs the calculated correction parameter 250 to the image correction unit 203.
As described with reference to
illustrated in
The correction parameter calculation unit 202 calculates the following image correction parameters illustrated in
(C1) the temporal direction smoothing coefficient (Ft);
(C2) the spatial direction smoothing coefficient (Fs); and
(C3) the smoothing processing gain value (G).
The above three types of image correction parameters calculated by the correction parameter calculation unit 202 are input to the image correction unit 203 of the online processing unit 200 illustrated in
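The portion of the description reproduced here defines the inputs and outputs of this calculation but not the numerical mapping between them; the sketch below therefore assumes, purely for illustration, a simple proportional mapping of each characteristic amount by its stored change rate, clamped to the range 0 to 1.

def calculate_correction_parameters(delta_y_frame, delta_y_line, mv_magnitude,
                                    rate_frame, rate_line, rate_mv):
    # rate_frame, rate_line, and rate_mv stand for the input/output characteristic
    # amount change rate data read from the storage unit (database) 150 for the
    # region being processed; the linear scaling and clamping are assumptions.
    clamp = lambda v: max(0.0, min(1.0, v))
    ft = clamp(delta_y_frame * rate_frame)  # (C1) temporal direction smoothing coefficient
    fs = clamp(delta_y_line * rate_line)    # (C2) spatial direction smoothing coefficient
    g = clamp(mv_magnitude * rate_mv)       # (C3) smoothing processing gain value
    return ft, fs, g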
(Steps S205 and S206)
Next, in step S205, the online processing unit 200 executes the image correction processing to which the correction parameters calculated in step S204 have been applied, for the image to be corrected input in step S201, and outputs the corrected image to the display device in step S206.
This processing is the processing executed by the image correction unit 203 of the online processing unit 200 illustrated in
The image correction unit 203 executes the image correction processing for the image to be corrected 50, applying the following correction parameters 250 input from the correction parameter calculation unit 202.
(C1) The temporal direction smoothing coefficient (Ft)
(C2) The spatial direction smoothing coefficient (Fs)
(C3) The smoothing processing gain value (G)
The corrected image generated by applying the above correction parameters is output to the display device 110 and displayed.
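As an illustration only, the sketch below assumes that the temporal direction smoothing coefficient blends the current frame with the previous output frame, that the spatial direction smoothing coefficient blends in a locally averaged frame, and that the gain value mixes the smoothed result back into the input; the actual filter structure is the one described with reference to the earlier figures and may differ.

import numpy as np
from scipy.ndimage import uniform_filter

def correct_image(frame, prev_output, ft, fs, g):
    # Temporal direction smoothing (C1): blend with the previous output frame.
    temporal = (1.0 - ft) * frame + ft * prev_output
    # Spatial direction smoothing (C2): blend with a 3x3 local average.
    spatial = (1.0 - fs) * temporal + fs * uniform_filter(temporal, size=3)
    # Smoothing processing gain (C3): mix the smoothed result into the input frame.
    return (1.0 - g) * frame + g * spatial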
(Step S207)
Next, in step S207, the online processing unit 200 determines whether the processing for all the images to be corrected has been completed.
In the case where there is an unprocessed image, the processing of step S201 and the following steps is executed for the unprocessed image.
In a case where it is determined that the processing for all the images to be corrected has been completed, the processing is terminated.
Note that the correction parameters (C1) to (C3) applied in the image correction processing in step S205 are the correction parameters that produce the flicker reduction effect, and are the correction parameters that reflect the characteristics of the input image and the display device output characteristics.
Therefore, optimum flicker reduction processing according to characteristics of an image and characteristics of a display device becomes possible by the image correction to which the correction parameters are applied.
Next, the sequence of the processing example 2 executed by the online processing unit 200 will be described with reference to the flowchart illustrated in
As described with reference to
Note that the image correction processing in the online processing unit 200 is correction processing executed for the purpose of reducing flicker.
The processing example 2 illustrated in
For example, in the case of a battery-driven liquid crystal display apparatus, such as a smartphone, a tablet terminal, or a portable PC, there is a demand to suppress the battery consumption as much as possible.
The processing example 2 to be described below is processing in response to such a demand, and is a processing example of confirming the battery remaining amount of the liquid crystal display apparatus, and cancelling or selecting the correction processing according to the remaining amount.
Note that the processing according to the flowchart illustrated in
Hereinafter, the processing of each step of the flowchart illustrated in
(Step S301)
First, in step S301, the online processing unit 200 inputs the image to be corrected.
(Steps S302 and S303)
Next, in step S302, the online processing unit 200 confirms the battery remaining amount of the liquid crystal display apparatus.
Further, in step S303, the online processing unit 200 determines whether the battery remaining amount is a predefined threshold value or more.
For example, the threshold value is a predefined value such as the battery remaining amount=25%.
(Steps S304 and S305)
In step S303, in the case where the battery remaining amount is determined to be the predefined threshold value or more, execution of the image correction processing is determined in step S304, and the processing in step S311 and subsequent steps is executed.
On the other hand, in step S303, in the case where the battery remaining amount is determined to be less than the predefined threshold value, cancellation of the image correction processing is determined in step S305, and the processing is terminated.
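A minimal sketch of this battery gate, using the 25% figure given above as the default threshold, is as follows.

def should_execute_correction(battery_remaining_percent, threshold_percent=25.0):
    # Steps S302 to S305: the image correction processing is executed only when
    # the battery remaining amount is the predefined threshold value or more.
    return battery_remaining_percent >= threshold_percent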
(Step S311)
As described above, in the case where execution of the image correction processing is determined in step S304, the processing in step S311 and subsequent steps is executed.
In step S311, the online processing unit 200 extracts the characteristic amounts of the image to be corrected.
This processing is the processing executed by the image characteristic amount calculation unit 201 of the online processing unit 200 illustrated in
The image characteristic amount calculation unit 201 acquires the following image characteristic amounts from the image to be corrected 50.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
“(1) The interframe luminance change amount: ΔYframe(n)” is a difference in image frame average luminance between two consecutive image frames.
“(2) The interline luminance change amount: ΔYline(n)” is a difference in pixel line average luminance between adjacent pixel lines in one image frame.
Note that the interline luminance change amount is calculated for each of a horizontal line and a vertical line.
“(3) The interframe motion vector: MVframe(n)” is a motion vector indicating a motion amount between frames calculated from two consecutive image frames.
The image characteristic amount calculation unit 201 calculates these three types of image characteristic amounts, in other words, image characteristic amounts 210 illustrated in
(Step S312)
Next, in step S312, the online processing unit 200 selects, from the following processing, one or more processes determined to have a high flicker reduction effect on the basis of the image characteristic amounts extracted in step S311.
(a) Interframe luminance difference reduction processing
(b) Interline luminance difference reduction processing
(c) Luminance difference reduction processing according to a motion vector
For example, the following characteristic amounts extracted from the image to be corrected in step S311, in other words:
(1) the interframe luminance change amount: ΔYframe(n);
(2) the interline luminance change amount: ΔYline (n); and
(3) the interframe motion vector: MVframe (n)
are compared with the predefined threshold values Th1 to Th3, respectively. When a characteristic amount is equal to or larger than its corresponding threshold value, it is determined that the corresponding processing among (a) to (c) has the flicker reduction effect.
Specifically, for example, the following determination processing is performed.
In the case where
(Determination Expression 1) the interframe luminance change amount: ΔYframe(n)≥Th1
is satisfied,
it is determined that there is the flicker reduction effect by (a) the interframe luminance difference reduction processing.
Furthermore,
in the case where
(Determination Expression 2) the interline luminance change amount: ΔYline (n)≥Th2
is satisfied,
it is determined that there is the flicker reduction effect by (b) the interline luminance difference reduction processing.
Furthermore,
in the case where
(Determination Expression 3) the interframe motion vector: MVframe(n)≥Th3
is satisfied,
it is determined that there is the flicker reduction effect by (c) the luminance difference reduction processing according to a motion vector.
Note that these determination processes can be performed on a pixel basis of the image to be corrected or on a pixel region basis configured by a plurality of pixels.
As described above, in step S312, the online processing unit 200 selects, from the following processing, one or more processes determined to have a high flicker reduction effect on the basis of the image characteristic amounts extracted in step S311.
(a) Interframe luminance difference reduction processing
(b) Interline luminance difference reduction processing
(c) Luminance difference reduction processing according to a motion vector
(Step S313)
Next, in step S313, the online processing unit 200 determines whether there is a sufficient battery remaining amount to execute the processing selected from below as the processing having the flicker reduction effect in step S312, in other words:
(a) the interframe luminance difference reduction processing;
(b) the interline luminance difference reduction processing; and
(c) the luminance difference reduction processing according to a motion vector.
Note that whether the battery remaining amount is sufficient to execute the selected processing is determined by comparison with a predefined threshold remaining amount.
The threshold remaining amount may be set differently depending on the number of processes selected as the processing having the flicker reduction effect in step S312.
For example, in the case where the threshold value of a case where all the processing (a) to (c) are selected as the processing having the flicker reduction effect in step S312 is Tha, the threshold value of a case where two of the processing (a) to (c) are selected is Thb, and the threshold value of a case where one processing is selected is Thc, the threshold values can be set to satisfy the following relationship.
Tha>Thb>Thc
In step S313, when the online processing unit 200 determines that there is the sufficient battery remaining amount for executing all the processing selected as the processing having the flicker reduction effect in step S312, the processing proceeds to step S315.
On the other hand, when the online processing unit 200 determines that there is no sufficient battery remaining amount for executing all the selected processing, the processing proceeds to step S314.
(Step S314)
The processing proceeds to step S314 in a case where it is determined in step S313 that there is no sufficient battery remaining amount for executing all of the processing selected in step S312.
In step S314, the online processing unit 200 either cancels the image correction processing or further narrows down the processing selected in step S312. This narrowing down is executed so as to leave the processing having a higher flicker reduction effect.
In the case where cancellation of the image correction processing is determined in step S314, the processing is terminated without performing the image correction processing. In this case, an image without correction is output to the display device.
On the other hand, in the case where the processing selected in step S312 has been narrowed down, the processing in step S315 and subsequent steps is executed for the narrowed-down processing.
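A possible sketch of the threshold comparison in step S313 and the narrowing down in step S314 is given below; the percentage values for Thc, Thb, and Tha and the rule of dropping the lowest-priority processing first are assumptions for illustration only.

def narrow_down_processing(selected, battery_remaining_percent,
                           thresholds=(30.0, 45.0, 60.0)):
    # thresholds[k - 1] is the battery remaining amount required to execute k
    # selected processes, so that Tha (60) > Thb (45) > Thc (30) as in the text.
    selected = list(selected)  # assumed to be ordered by flicker reduction effect
    while selected:
        if battery_remaining_percent >= thresholds[len(selected) - 1]:
            return selected    # sufficient battery: execute the current selection
        selected.pop()         # narrow down: drop the processing with the lowest effect
    return []                  # cancellation of the image correction processing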
(Step S315)
Next, in step S315, the online processing unit 200 calculates the correction parameter to be applied to execute the processing selected from below as the processing having the flicker reduction effect in step S312, or the processing selected by narrowing down in step S314, in other words:
(a) the interframe luminance difference reduction processing;
(b) the interline luminance difference reduction processing; and
(c) the luminance difference reduction processing according to a motion vector.
This processing is the processing executed by the correction parameter calculation unit 202 of the online processing unit 200 illustrated in
Note that the calculation of the correction parameter is executed on a region basis, the region being targeted for flicker reduction effect existence determination processing in step S312. In other words, the processing is executed on a pixel basis of the image to be corrected or on a pixel region basis configured by a plurality of pixels.
The correction parameter calculation unit 202 inputs
the following image characteristic amounts of the image to be corrected 50 from the image characteristic amount calculation unit 201.
(1) An interframe luminance change amount: ΔYframe (n)
(2) An interline luminance change amount: ΔYline(n)
(3) An interframe motion vector: MVframe (n)
Moreover, the correction parameter calculation unit 202 inputs the following data described with reference to
(1) the input/output image characteristic amount change rate data corresponding to the interframe luminance change amount;
(2) the input/output image characteristic amount change rate data corresponding to the interline luminance change amount; and
(3) the input/output image characteristic amount change rate data corresponding to the interframe motion vector,
from the storage unit (database) 150.
The correction parameter calculation unit 202 calculates a correction parameter 250 for reducing flicker of the image to be corrected 50, using the input data, and outputs the calculated correction parameter 250 to the image correction unit 203.
As described with reference to
illustrated in
illustrated in
The correction parameter calculation unit 202 calculates the following image correction parameters illustrated in
(C1) the temporal direction smoothing coefficient (Ft);
(C2) the spatial direction smoothing coefficient (Fs); and
(C3) the smoothing processing gain value (G).
The above three types of image correction parameters calculated by the correction parameter calculation unit 202 are input to the image correction unit 203 of the online processing unit 200 illustrated in
(Steps S316 and S317)
Next, in step S316, the online processing unit 200 executes the image correction processing to which the correction parameters calculated in step S315 have been applied, for the image to be corrected input in step S301, and outputs the corrected image to the display device in step S317.
This processing is the processing executed by the image correction unit 203 of the online processing unit 200 illustrated in
The image correction unit 203 executes the image correction processing for the image to be corrected 50, applying the following correction parameters 250 input from the correction parameter calculation unit 202.
(C1) The temporal direction smoothing coefficient (Ft)
(C2) The spatial direction smoothing coefficient (Fs)
(C3) The smoothing processing gain value (G)
The corrected image generated by applying the above correction parameters is output to the display device 110 and displayed.
(Step S318)
Next, in step S318, the online processing unit 200 determines whether the processing for all the images to be corrected has been completed.
In the case where there is an unprocessed image, the processing of step S301 and the following steps is executed for the unprocessed image.
In a case where it is determined that the processing for all the images to be corrected has been completed, the processing is terminated.
Note that the correction parameters (C1) to (C3) applied in the image correction processing in step S316 are the correction parameters that produce the flicker reduction effect, and are the correction parameters that reflect the characteristics of the input image and the display device output characteristics.
Therefore, optimum flicker reduction processing according to characteristics of an image and characteristics of a display device becomes possible by the image correction to which the correction parameters are applied.
Next, a hardware configuration example of the liquid crystal display apparatus will be described with reference to
A central processing unit (CPU) 301 functions as a control unit and a data processing unit that execute various types of processing according to a program stored in a read only memory (ROM) 302 or a storage unit 308. For example, the CPU 301 executes processing according to the sequence described in the above embodiment. A random access memory (RAM) 303 stores the program executed by the CPU 301, data, and the like. These CPU 301, ROM 302, and RAM 303 are mutually connected by a bus 304.
The CPU 301 is connected to an input/output interface 305 via the bus 304. An input unit 306 including various switches, a keyboard, a mouse, a microphone, and the like, through which the user can input commands, and an output unit 307 that executes data output to a display unit, a speaker, and the like are connected to the input/output interface 305. The CPU 301 executes various types of processing in accordance with a command input from the input unit 306, and outputs a processing result to the output unit 307, for example.
The storage unit 308 connected to the input/output interface 305 includes, for example, a hard disk and the like, and stores the program executed by the CPU 301 and various data. A communication unit 309 functions as a transmission/reception unit for Wi-Fi communication, Bluetooth (BT) communication, or another data communication via a network such as the Internet or a local area network, and communicates with an external device.
A drive 310 connected to the input/output interface 305 drives a removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory such as a memory card, and executes data recording or reading.
The embodiments of the present disclosure have been described in detail with reference to the specific embodiments. However, it is obvious that those skilled in the art can make modifications and substitutions of the embodiments without departing from the gist of the present disclosure. In other words, the present invention has been disclosed in the form of exemplification, and should not be restrictively interpreted. To judge the gist of the present disclosure, the section of claims should be taken into consideration.
Note that the technology disclosed in the present specification can have the following configurations.
(1) A liquid crystal display apparatus including:
a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device;
a characteristic amount extraction unit configured to extract a characteristic amount of an image to be corrected;
a correction parameter calculation unit configured to calculate a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate; and
an image correction unit configured to execute, for the image to be corrected, correction processing to which the correction parameter has been applied.
(2) The liquid crystal display apparatus according to (1), in which
the storage unit includes
the characteristic amount change rate between input/output sample images corresponding to a temporal change amount of at least one of characteristic amounts (1) to (3):
(1) an interframe luminance change amount;
(2) an interline luminance change amount; and
(3) an interframe motion vector,
the characteristic amount extraction unit extracts
at least one of the characteristic amounts (1) to (3) from the image to be corrected, and
the correction parameter calculation unit calculates
the correction parameter for reducing flicker on the basis of the one of the characteristic amounts (1) to (3) of the image to be corrected and the characteristic amount change rate of one of the characteristic amounts (1) to (3).
(3) The liquid crystal display apparatus according to (1) or (2), in which
the correction parameter calculation unit calculates
at least one of correction parameters (C1) to (C3):
(C1) a temporal direction smoothing coefficient;
(C2) a spatial direction smoothing coefficient; and
(C3) a smoothing processing gain value,
as the correction parameter for reducing flicker.
(4) The liquid crystal display apparatus according to any one of (1) to (3), in which
the correction parameter calculation unit calculates a temporal direction smoothing coefficient that is the correction parameter for reducing flicker on the basis of an interframe luminance change amount that is the characteristic amount of the image to be corrected.
(5) The liquid crystal display apparatus according to any one of (1) to (4), in which
the correction parameter calculation unit calculates a spatial direction smoothing coefficient that is the correction parameter for reducing flicker on the basis of an interline luminance change amount that is the characteristic amount of the image to be corrected.
(6) The liquid crystal display apparatus according to any one of (1) to (5), in which
the correction parameter calculation unit calculates a smoothing processing gain value that is the correction parameter for reducing flicker on the basis of an interframe motion vector that is the characteristic amount of the image to be corrected.
(7) The liquid crystal display apparatus according to any one of (1) to (6), in which
the characteristic amount extraction unit extracts the characteristic amount of the image to be corrected on a pixel basis or on a pixel region basis, and
the correction parameter calculation unit calculates the correction parameter for reducing flicker on a pixel basis of the image to be corrected or on a pixel region basis.
(8) The liquid crystal display apparatus according to any one of (1) to (7), in which
the image correction unit selects or cancels the correction processing to be executed for the image to be corrected according to a battery remaining amount of the liquid crystal display apparatus.
(9) The liquid crystal display apparatus according to any one of (1) to (8), further including:
an offline processing unit configured to calculate the characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device.
(10) The liquid crystal display apparatus according to (9), in which
the offline processing unit calculates the characteristic amount change rate between the input/output sample images corresponding to a temporal change amount of each of characteristic amounts (1) to (3):
(1) an interframe luminance change amount;
(2) an interline luminance change amount; and
(3) an interframe motion vector.
(11) The liquid crystal display apparatus according to (9) or (10), in which
the offline processing unit acquires information for acquiring the characteristic amount of the output sample image from a panel drive unit of the liquid crystal display device.
(12) A liquid crystal display apparatus including:
an offline processing unit configured to calculate a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device;
a storage unit configured to store the characteristic amount change rate calculated by the offline processing unit; and
an online processing unit configured to apply the characteristic amount change rate stored in the storage unit and execute correction processing of an image to be corrected, in which
the online processing unit includes
a characteristic amount extraction unit configured to extract a characteristic amount of the image to be corrected,
a correction parameter calculation unit configured to calculate a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate, and
an image correction unit configured to execute, for the image to be corrected, correction processing to which the correction parameter has been applied.
(13) The liquid crystal display apparatus according to (12), in which
the storage unit includes the characteristic amount change rate between input/output sample images corresponding to a temporal change amount of at least one of characteristic amounts (1) to (3):
(1) an interframe luminance change amount;
(2) an interline luminance change amount; and
(3) an interframe motion vector,
the characteristic amount extraction unit of the online processing unit extracts at least one of the characteristic amounts (1) to (3) from the image to be corrected, and
the correction parameter calculation unit calculates the correction parameter for reducing flicker on the basis of the one of the characteristic amounts (1) to (3) of the image to be corrected and the characteristic amount change rate of one of the characteristic amounts (1) to (3).
(14) The liquid crystal display apparatus according to (12) or (13), in which
the correction parameter calculation unit of the online processing unit calculates at least one of correction parameters (C1) to (C3):
(C1) a temporal direction smoothing coefficient;
(C2) a spatial direction smoothing coefficient; and
(C3) a smoothing processing gain value,
as the correction parameter for reducing flicker.
(15) A liquid crystal display control method executed in a liquid crystal display apparatus,
the liquid crystal display apparatus including a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device,
the liquid crystal display control method including:
by a characteristic amount extraction unit, extracting a characteristic amount of an image to be corrected;
by a correction parameter calculation unit, calculating a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate; and
by an image correction unit, executing, for the image to be corrected, correction processing to which the correction parameter has been applied and outputting the image to be corrected on a display unit.
(16) A liquid crystal display control method executed in a liquid crystal display apparatus, the liquid crystal display control method including:
by an offline processing unit,
executing an offline processing step of calculating a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device, and storing the characteristic amount change rate in a storage unit; and
by an online processing unit,
extracting a characteristic amount of an image to be corrected,
calculating a correction parameter for reducing flicker on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate stored in the storage unit, and
executing, for the image to be corrected, correction processing to which the correction parameter has been applied, and displaying the image to be corrected on a display unit.
(17) A program for executing liquid crystal display control processing in a liquid crystal display apparatus,
the liquid crystal display apparatus including a storage unit configured to store a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device,
the program generating a corrected image for a display unit output by executing:
characteristic amount extraction processing of an image to be corrected in a characteristic amount extraction unit;
processing of calculating a correction parameter for reducing flicker based on a characteristic amount of the image to be corrected and the characteristic amount change rate in a correction parameter calculation unit; and
correction processing to which the correction parameter has been applied for the image to be corrected in an image correction unit.
(18) A program for executing liquid crystal display control processing in a liquid crystal display apparatus, the program generating a corrected image for a display unit output by causing:
an offline processing unit to execute offline processing of calculating a characteristic amount change rate that is a change rate between a characteristic amount of a sample image and a characteristic amount of an output sample image with respect to a liquid crystal display device, and storing the characteristic amount change rate in a storage unit; and
an online processing unit to execute
characteristic amount extraction processing of an image to be corrected,
processing of calculating a correction parameter for reducing flicker based on the characteristic amount of the image to be corrected and the characteristic amount change rate stored in the storage unit, and
correction processing to which the correction parameter has been applied, for the image to be corrected.
Further, the series of processing described in the specification can be executed by hardware, software, or a combined configuration of the hardware and software. In the case of executing the processing by software, a program, which records the processing sequence, can be installed and executed in a memory in a computer incorporated in dedicated hardware, or the program can be installed and executed in a general-purpose computer capable of executing various types of processing. For example, the program can be recorded in a recording medium beforehand. Other than the installation from the recording medium to the computer, the program can be received via a network such as a local area network (LAN) or the Internet and can be installed to a recording medium such as a built-in hard disk.
Note that the various types of processing described in the specification may be executed not only in chronological order as described but also in parallel or individually depending on the processing capability of the device executing the process or as required. Furthermore, the system in the present specification is a logical aggregate configuration of a plurality of devices, and is not limited to devices having respective configurations within the same housing.
As described above, with the configuration of one embodiment of the present disclosure, effective image correction processing for reducing flicker according to the characteristics of images is executed, and the flicker of an image to be displayed on the liquid crystal display apparatus can be effectively reduced.
Specifically, characteristic amount change rate data, which is the change rate between the characteristic amount of the sample image and the characteristic amount of the sample image output to the liquid crystal display device, is acquired in advance and stored in the storage unit. The correction parameter for reducing flicker is calculated on the basis of the characteristic amount of the image to be corrected and the characteristic amount change rate data of the sample images stored in the storage unit. The correction processing to which the calculated correction parameter has been applied is executed for the image to be corrected to generate a display image. As the characteristic amount, for example, the interframe luminance change amount, the interline luminance change amount, or the interframe motion vector is used.
With the configuration, the effective image correction processing for reducing flicker according to the characteristics of images is executed, and the flicker of the image to be displayed on the liquid crystal display apparatus can be effectively reduced.