1. Field of the Invention
The present invention relates to image processing apparatuses, methods, and programs, and in particular to an image processing apparatus, method, and program capable of easily detecting a motion vector of a pixel matching those of neighboring pixels.
2. Description of the Related Art
Cathode Ray Tubes (CRTs) are typical known moving-image display apparatuses. These days, a growing number of liquid crystal display (LCD) apparatuses are used in addition to CRTs, as described in Japanese Unexamined Patent Application Publication No. 2002-219811.
When a CRT receives a command for displaying one of a plurality of frames constituting a moving image, it sequentially scans a plurality of horizontal lines (scanning lines) constituting the CRT screen with a built-in electron gun to form the specified frame (hereinafter, the frame serving as a target of the display command is referred to as the target display frame) on the screen.
In this case, each of the pixels constituting the target display frame is displayed like an impulse in the time direction. In other words, each pixel is activated only at the moment it is shot by the scanning electron gun. Hereinafter, display apparatuses employing the same display method as that employed by CRTs are collectively referred to as impulse display apparatuses.
In contrast, when an LCD apparatus is to display one of the frames constituting a moving image (target display frame) on the screen, it maintains the illumination of all liquid crystal regions constituting the screen from when the command for displaying the target frame is issued until when the command for displaying the subsequent frame is issued.
Assume that each pixel corresponds to one liquid crystal region. In this case, the frame display command causes the pixel value of each of the pixels constituting the target display frame to be transferred to the LCD apparatus. The LCD apparatus applies voltages of levels representing the specified pixel values to the respective liquid crystal regions (pixels) constituting the screen. As a result, each of the liquid crystal regions outputs light according to the applied voltage. In short, the level of the light output from a liquid crystal region corresponds to the level of the voltage applied to the liquid crystal region.
Thereafter, at least until a command for displaying the subsequent frame is issued, the specified voltage levels are continuously applied to the liquid crystal regions, which thus keeps on outputting the respective levels of light. In other words, the liquid crystal regions continue to display pixels with the specified pixel values.
When the pixel values of some pixels need to be changed as a result of a command for displaying the subsequent frame being issued, voltages of levels corresponding to the changed pixel values are applied to the liquid crystal regions corresponding to the relevant pixels (i.e., the voltage levels applied to the relevant liquid crystal regions are changed), and hence the output levels (levels of light) of the liquid crystal regions also change.
As described above, LCD apparatuses employ a display method different from that employed by impulse display apparatuses such as CRTs, and have several advantages over impulse display apparatuses, such as small installation space, low power consumption, and high resistance to distortion.
LCD apparatuses, however, have a first problem in that motion blurring occurs more noticeably on them than on impulse display apparatuses when a moving image is displayed.
It has been believed that the occurrence of this first problem, i.e., the occurrence of motion blurring, in LCD apparatuses is caused by the low response speed of liquid crystal. More specifically, LCD apparatuses have been believed to undergo motion blurring because the output level of each of the liquid crystal regions takes a long time to reach the specified target level (e.g., the level corresponding to the specified pixel value, if each of the liquid crystal regions corresponds to one pixel).
In order to overcome this first problem, i.e., to prevent motion blurring from occurring in LCD apparatuses, Japanese Unexamined Patent Application Publication No. 2002-219811 describes the following method. According to the method described in Japanese Unexamined Patent Application Publication No. 2002-219811, a voltage of higher level than the target level is applied to each of the liquid crystal regions (pixels). Hereinafter, this method is referred to as the overdrive method. In other words, the overdrive method employs a level higher than the target level employed by the conventional method. In this sense, the overdrive method is a method for correcting the target level.
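As a rough illustration (a sketch of the general overdrive idea, not necessarily the method of the cited publication), overdrive can be modeled as boosting the drive level in proportion to the frame-to-frame change so that a slow pixel gets closer to its target within one frame time. The function name, the gain, and the value range below are hypothetical.

```python
def overdrive_level(target, previous, gain=0.5, v_min=0.0, v_max=255.0):
    """Boost the drive level beyond the target in proportion to the
    frame-to-frame change, so a slow (first-order-lag) pixel gets
    closer to its target within one frame time."""
    boosted = target + gain * (target - previous)
    # Drive levels are limited to the panel's valid range.
    return max(v_min, min(v_max, boosted))

# Example: a pixel stepping from 100 to 200 is driven at 250 for one frame.
print(overdrive_level(200.0, 100.0))  # -> 250.0
```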
Unfortunately, this overdrive method cannot prevent the occurrence of motion blurring. Thus, this first problem remains unsolved because there are no effective methods available for preventing motion blurring in LCD apparatuses.
In view of these circumstances, the applicant of the present invention has investigated the reason why the known overdrive method cannot overcome the first problem, that is, the reason why motion blurring in LCD apparatuses cannot be prevented from occurring. Based on the result of this investigation, the applicant of the present invention has succeeded in inventing an image processing apparatus capable of solving the first problem. This invention was filed by the applicant of the present invention (Japanese Patent Application No. 2003-270965).
As described above, one of the causes of motion blurring in LCD apparatuses is the low response speed of liquid crystal (pixels), and the overdrive method takes this low response speed into consideration.
However, the occurrence of motion blurring in LCD apparatuses is caused not only by the low response speed of liquid crystal, but also by a characteristic of human vision (of observers of LCD apparatuses) called “follow-up seeing”. The applicant of the present invention has found that the known overdrive method cannot prevent motion blurring from occurring because it takes no account of this follow-up seeing. The term “follow-up seeing” denotes a physiological reaction in which human beings unintentionally follow moving objects with their eyes. It is also called “afterimages on the retina”.
In other words, the applicant of the present invention has concluded that the known overdrive method, in which the pixel values of all pixels (the voltage levels for all liquid crystal regions) for displaying a moving object are corrected, that is, only the time response of the output levels of the liquid crystal regions is improved, cannot eliminate motion blurring due to the characteristics of follow-up seeing associated with human vision.
In view of this problem, the applicant of the present invention has invented an image processing apparatus, as described above, for carrying out image processing that takes into account not only the low response of liquid crystal but also the characteristics of follow-up seeing.
More specifically, an image processing apparatus invented by the applicant of the present invention prevents the occurrence of motion blurring due to the follow-up seeing by correcting the pixel value of a pixel to be processed from among the pixels constituting the target display frame (hereinafter, referred to as the pixel of interest) according to the motion vector (direction and magnitude) of the pixel of interest, if the pixel of interest exists at an edge of the moving object.
In this case, however, a second problem arises: if the motion vector of the pixel of interest does not match the motion vectors of pixels neighboring the pixel of interest (i.e., if a motion vector matching the surroundings is not used), the pixel value (correction value) of the pixel of interest on the resultant correction image (the target display frame) does not match the pixel values (correction values) of its neighboring pixels. In other words, the second problem can be restated as the pixel of interest on the resultant correction image mismatching its neighboring pixels, leading to low image quality.
Although the first and second problems have been described as involved with LCD apparatuses, they are generally involved with any display apparatus having the following characteristics, as well as LCD apparatuses. In more detail, display apparatuses exhibiting the first and second problems are characterized in that they have a plurality of display elements requiring a predetermined time from when the target level has been specified to when the output level reaches the target level and that each of the display elements is associated with at least some of the pixels constituting a frame or a field.
Many of the display apparatuses with such characteristics employ a display method in which the illumination of at least some of the display elements constituting the screen is maintained for a predetermined period of time after the display of a predetermined frame or field is specified (e.g., for a period of time until the display of the subsequent frame or field is specified). Hereinafter, display apparatuses employing this display method, such as LCD apparatuses, are collectively referred to as hold-type display apparatuses. Furthermore, the display with display elements (liquid crystal for LCD apparatuses) constituting the screen of hold-type display apparatuses is referred to as a hold-display. Thus, the first problem and the second problem can be regarded as problems associated with hold-type display apparatuses.
The present invention is conceived in light of the above-described circumstances, and is intended to easily detect a motion vector of a pixel matching those of neighboring pixels.
A first image processing apparatus according to the present invention includes: a candidate generating device for setting as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining device for determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated by the candidate generating device; a luminance-change calculating device for calculating a degree of change in luminance around the pixel of interest; and a correction device for evaluating a confidence level of the motion vector determined by the motion-vector determining device based on results of processing by the luminance-change calculating device and the candidate generating device, and correcting the motion vector if it is determined that the confidence level is low.
If the degree of change in luminance calculated by the luminance-change calculating device is below a threshold, the correction device may determine that the confidence level of the motion vector is low and correct the motion vector.
The candidate generating device may detect a first pixel on the first access unit as a counterpart pixel for a second pixel on the second access unit, the second pixel being arranged at a location corresponding to the location of the pixel of interest, and generate a vector originating from the pixel of interest and terminating at the first pixel as the candidate motion vector at the pixel of interest.
If it is determined that the first access unit includes a plurality of candidates for the counterpart pixel or that the confidence level of the first pixel being the counterpart pixel is low, the candidate generating device may provide the correction device with first information indicating a command for correcting the motion vector. Furthermore, if the correction device receives the first information from the candidate generating device, the correction device may determine that the confidence level of the motion vector is low and correct the motion vector.
If the pixel of interest is included in the plurality of candidates for the counterpart pixel, the candidate generating device may provide the correction device with second information indicating that the pixel of interest is included in the plurality of candidates for the counterpart pixel, and if the correction device receives the second information from the candidate generating device, the correction device may determine that the confidence level of the motion vector is low and correct the motion vector to a 0 vector.
A first image processing method according to the present invention includes: a candidate generating step of setting as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining step of determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated in the candidate generating step; a luminance-change calculating step of calculating a degree of change in luminance around the pixel of interest; and a correction step of evaluating a confidence level of the motion vector determined in the motion-vector determining step based on results of processing in the luminance-change calculating step and the candidate generating step, and correcting the motion vector if it is determined that the confidence level is low.
A first computer-executable program according to the present invention sets as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and applies image processing to the pixel of interest. The program includes: a candidate generating step of comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining step of determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated in the candidate generating step; a luminance-change calculating step of calculating a degree of change in luminance around the pixel of interest; and a correction step of evaluating a confidence level of the motion vector determined in the motion-vector determining step based on results of processing in the luminance-change calculating step and the candidate generating step, and correcting the motion vector if it is determined that the confidence level is low.
According to the first image processing apparatus, the first image processing method, and the first computer-executable program, a predetermined pixel from among pixels constituting a first access unit is set as a pixel of interest, and image processing is applied to the pixel of interest. More specifically, the first access unit is compared with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest. In addition to the candidate motion vector at the pixel of interest, candidate motion vectors at pixels neighboring the pixel of interest are generated. A candidate motion vector with highest frequency from among these candidate motion vectors is determined as a motion vector at the pixel of interest. A confidence level of the determined motion vector is evaluated based on a degree of change in luminance around the pixel of interest and the result of processing for generating the motion vectors. The motion vector is corrected if it is determined that the confidence level is low.
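As an illustration of this flow, the following Python sketch chains the four stages for one pixel of a one-dimensional luminance row. Everything concrete in it — the function names, the window size of five pixels, the search range of ±6, the ±8 voting neighborhood, and the threshold — is an assumption chosen for the example, not a limitation of the apparatus; the sketch also assumes the pixel of interest lies far enough from the row borders.

```python
from collections import Counter

def candidate_vector(cur, prev, i, search=6, half=2):
    """Candidate generation: compare the first access unit (cur) with the
    preceding one (prev) and return a candidate motion vector at pixel i."""
    window = prev[i - half:i + half + 1]      # window around i on prev frame
    def sad(j):                               # sum of absolute differences
        return sum(abs(a - b)
                   for a, b in zip(cur[j - half:j + half + 1], window))
    pos = min(range(i - search, i + search + 1), key=sad)
    return pos - i                            # pvec: from i toward pos

def motion_vector(cur, prev, i, neigh=8):
    """Determination: the most frequent candidate among the pixel of
    interest and its neighbors becomes the motion vector at i."""
    cands = [candidate_vector(cur, prev, k)
             for k in range(i - neigh, i + neigh + 1)]
    return Counter(cands).most_common(1)[0][0]

def luminance_change(cur, i):
    """Degree of change in luminance around pixel i."""
    return max(abs(cur[i] - cur[i - 1]), abs(cur[i] - cur[i + 1]))

def detect(cur, prev, i, slope_thresh=4):
    """Correction: if confidence is low (here judged only by a flat
    luminance around i), correct the determined vector; this sketch
    corrects it to 0."""
    vec = motion_vector(cur, prev, i)
    return 0 if luminance_change(cur, i) < slope_thresh else vec
```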
A second image processing apparatus according to the present invention includes: a candidate generating device for setting as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining device for determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated by the candidate generating device; a correction device for correcting the motion vector determined by the motion-vector determining device; and a processing executing device for carrying out predetermined processing using the motion vector corrected by the correction device. The correction device corrects the motion vector by a first method based on a characteristic of the predetermined processing by the processing executing device.
The image processing apparatus may further include a luminance-change calculating device for calculating a degree of change in luminance around the pixel of interest. The correction device may evaluate a confidence level of the motion vector determined by the motion-vector determining device based on results of processing by the luminance-change calculating device and the candidate generating device and, if it is determined that the confidence level of the motion vector is low, the correction device may further correct the motion vector corrected by the first method by a second method.
A second image processing method according to the present invention is executed by an information processing apparatus. The method includes: a candidate generating step of setting as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining step of determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated in the candidate generating step; a correction step of correcting the motion vector determined in the motion-vector determining step; and a processing control step of controlling the information processing apparatus to carry out predetermined processing using the motion vector corrected in the correction step. In the correction step, the motion vector is corrected by a first correction method based on a characteristic of the predetermined processing by the image processing apparatus in the processing control step.
A second computer-executable program according to the present invention controls a processing executing apparatus for applying predetermined processing to a predetermined one of a plurality of access units constituting a moving image, wherein the predetermined processing uses a motion vector at each of pixels on the predetermined access unit. The program includes: a candidate generating step of setting as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and comparing the first access unit with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest; a motion-vector determining step of determining as a motion vector at the pixel of interest a candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest generated in the candidate generating step; a correction step of correcting the motion vector determined in the motion-vector determining step; and a processing control step of controlling the processing executing apparatus to carry out the predetermined processing using the motion vector corrected in the correction step. In the correction step, the motion vector is corrected by a first correction method based on a characteristic of the predetermined processing by the processing executing apparatus in the processing control step.
According to the second image processing apparatus, the second image processing method, and the second computer-executable program, a predetermined pixel from among pixels constituting a first access unit is set as a pixel of interest and the first access unit is compared with a second access unit preceding the first access unit to generate a candidate motion vector at the pixel of interest. A candidate motion vector with highest frequency from among the candidate motion vector at the pixel of interest and candidate motion vectors at pixels neighboring the pixel of interest is determined as a motion vector at the pixel of interest. The determined motion vector is corrected, and predetermined processing is carried out using the corrected motion vector. At the time, the motion vector is corrected by a first method based on a characteristic of the predetermined processing.
As described above, according to the present invention, the motion vector of a pixel of interest can be detected for image processing for preventing motion blurring (particularly, motion blurring due to follow-up seeing of human vision) from occurring in hold-type display apparatuses such as LCD apparatuses. In particular, a motion vector of a pixel matching those of neighboring pixels can be easily detected.
Before embodiments of the present invention are described, the correspondence between the elements recited in the claims and the specific examples described below in the embodiments is discussed. Some examples described in the embodiments may correspond to claim elements only implicitly; in other words, some examples described below may have no explicitly stated counterpart in the claims. Conversely, an example described below as explicitly corresponding to a certain claim element may also correspond to elements other than that element.
Not all examples described below are necessarily reflected in the claims as the invention. In other words, some examples described below may suggest inventions that are not included in the claims, e.g., inventions to be filed in a divisional application or to be added through amendment.
According to the present invention, a first image processing apparatus is provided. The first image processing apparatus (e.g., a motion detecting section 14 of an image processing apparatus 1 in
If the degree of change in luminance calculated by the luminance-change calculating means is below a threshold (e.g., the third condition requiring that Inequality (9) shown below be established is satisfied), the correction means may determine that the confidence level of the motion vector is low and correct the motion vector.
The candidate generating means may detect a first pixel (e.g., any pixel in the search range i−6 to i+6 in
If it is determined that the first access unit includes a plurality of candidates for the counterpart pixel (e.g., if the pixels at the search location i+5 and the search location i−1 are candidate counterpart pixels as shown in
If the pixel of interest is included in the plurality of candidates for the counterpart pixel (e.g., if the search location pos2 corresponding to the second smallest SAD, i.e., the minimal value min2, is the location i of the pixel of interest as shown in
According to the present invention, a first image processing method is provided. The first image processing method (e.g., image processing method by the motion detecting section 14 in
According to the present invention, a first computer-executable program is provided. The first program sets as a pixel of interest a predetermined pixel from among pixels constituting a first access unit and causes a computer (e.g., a CPU 201 in
According to the present invention, a second image processing apparatus is provided. The second image processing apparatus (e.g., the motion detecting section 14 and an image processing section 12 of an image processing apparatus 1 in
The image processing apparatus may further include luminance-change calculating means (e.g., the luminance-gradient detecting section 33 in
According to the present invention, a second image processing method is provided. The second image processing method is executed by an information processing apparatus (e.g., the image processing apparatus 1 in
According to the present invention, a second computer-executable program is provided. The second program is executed by a computer (e.g., the CPU 201 in
An image processing apparatus to which the present invention is applied will now be described with reference to the drawings.
Referring to
Although the image processing apparatus 1 processes (displays) a moving image in units of frames according to this embodiment as described above for the sake of simplified description, a moving image may be processed or displayed in units of fields. In other words, the image processing apparatus 1 is capable of performing image processing in access units, which are defined as units for image processing, such as frames and fields, in the present specification. In the following description, the access unit employed by the image processing apparatus 1 is presumed to be a frame.
Furthermore, it is also presumed that the hold-type display apparatus 2 causes each of display elements to display the corresponding pixel of a plurality of pixels constituting a first frame for a predetermined time from when the display of the first frame is specified to when the display of the subsequent second frame is specified, and retains (holds) the display of at least some of the display elements (hold-displays the display elements).
In more detail, the image processing apparatus 1 sequentially receives the image data of a plurality of frames constituting a moving image. In other words, the image processing apparatus 1 receives the image data of the target display frame (e.g., the pixel values of all pixels constituting the target display frame). More specifically, the image data of the target display frame is input to an image processing section 11, an image processing section 12, a reference-image storing section 13, and a motion detecting section 14.
The image processing section 11 applies predetermined image processing to the image data of the target display frame, one pixel at a time, and outputs the processed image data to a switching section 15. More specifically, the image processing section 11 applies predetermined image processing to each of a plurality of pixels constituting the target display frame to correct the pixel values of these pixels and sequentially outputs the corrected pixel values to the switching section 15 in a predetermined order.
The image processing carried out by the image processing section 11 is not limited to particular processing. In the example of
Furthermore, the image processing section 11 is not a component essential to the image processing apparatus 1, and thus can be omitted. If this is the case, the image data of the target display frame is input to the image processing section 12, the reference-image storing section 13, the motion detecting section 14, and an input end of the switching section 15 (of the two input ends of the switching section 15, the input end connected to the image processing section 11).
From among the image data (pixel values of pixels) constituting the target display frame, the image processing section 12 can perform correction (including 0 correction) of the pixel values of pixels corresponding to a moving object (e.g., pixels whose motion vectors detected by the motion detecting section 14 have a magnitude equal to a threshold or more), and outputs the corrected pixel values to the switching section 15.
The image processing section 12 can correct the pixel values of pixels corresponding to an object moving in any spatial direction on the target display frame. In the following description, the pixel at the upper-left corner of the target display frame is defined as a reference pixel for the sake of simplified description. Under this definition, the present invention presumes that an object moving in the horizontal direction to the right of the reference pixel (hereinafter, referred to as the spatial direction X) or in the direction opposite to the spatial direction X is processed by the image processing section 12. Accordingly, although the direction of the motion vector of a pixel of interest detected by the motion detecting section 14 according to the present invention is not limited to a particular spatial direction on the target frame, the following description presumes that the motion detecting section 14 detects a pixel of interest whose motion vector has a direction equal to the spatial direction X or the opposite direction to make the description more understandable.
The image processing section 12 includes a step-edge detecting section 21 and a correcting section 22.
The step-edge detecting section 21 detects pixels corresponding to an edge portion of a moving object from among the image data of the target display frame based on the detection result (motion vector) supplied by the motion detecting section 14, and supplies the detected pixels to the correcting section 22.
More specifically, the step-edge detecting section 21, which treats a step edge as the model of an object, disassembles the image data of the target display frame into image data items of step edges arranged in the spatial direction X, detects the pixels corresponding to an edge portion in each of the step edges, and supplies the detected pixels to the correcting section 22.
Here, a step edge is a collection of two different groups of pixels continuously arranged in a row: one group of pixels has a first pixel value and is arranged in a predetermined direction (spatial direction X in this example), and the other group of pixels has a second pixel value different from the first pixel value and is arranged in the same direction.
The step-edge detecting section 21 calculates the difference between the pixel value of a pixel of interest and the pixel value of a pixel neighboring the pixel of interest in a predetermined direction (the spatial direction X or the opposite direction in this example). Then, if the calculation result (difference value) is not, for example, 0, the step-edge detecting section 21 determines the pixel of interest as a pixel corresponding to the edge portion of the step edge.
Since the main objective here is to suppress motion blurring, it is sufficient to detect only the edge portions of moving step edges.
For this purpose, if the magnitude of the motion vector of the pixel of interest supplied by the motion detecting section 14 is equal to, for example, a threshold or more, the step-edge detecting section 21 determines that the step edge including the pixel of interest as one component is moving and carries out processing. More specifically, the step-edge detecting section 21 calculates the difference value between the pixel value of the pixel of interest and the pixel value of its neighboring pixel (hereinafter, referred to just as the difference value), and supplies as a detection result the difference value and the pixel value of the pixel of interest to the correcting section 22.
In contrast, if the magnitude of the motion vector of the pixel of interest supplied by the motion detecting section 14 is, for example, less than the threshold, the step-edge detecting section 21 determines that the step edge including the pixel of interest as one component is not moving and cancels the processing.
If the magnitude of the motion vector of the pixel of interest supplied by the motion detecting section 14 is equal to, for example, the threshold or more (i.e., the step edge including the pixel of interest as one component is moving in the spatial direction X or in the opposite direction), the correcting section 22 performs correction (including 0 correction) of the pixel value of the pixel of interest supplied by the step-edge detecting section 21. At this time, the correcting section 22 corrects the pixel value of the pixel of interest based on the motion vector (moving direction and amount of motion of the step edge) of the pixel of interest supplied by the motion detecting section 14 and the difference value (height of the step edge) supplied by the step-edge detecting section 21.
More specifically, for example, if the supplied difference value is not 0 and the magnitude of the supplied motion vector is equal to the threshold or more, then the correcting section 22 determines that the pixel of interest is the pixel corresponding to the edge portion of a moving step edge, and corrects the pixel value of the pixel of interest based on the supplied difference value and motion vector.
In contrast, for example, if the supplied difference value is 0 and the magnitude of the supplied motion vector is equal to the threshold or more, the correcting section 22 determines that the pixel of interest is one component of a moving step edge but is not the pixel corresponding to the edge portion (i.e., the pixel of interest is a pixel other than the pixel corresponding to the edge portion), and carries out 0 correction of the pixel of interest, that is, does not correct the pixel of interest.
Furthermore, for example, if the magnitude of the supplied motion vector is less than the threshold, the correcting section 22, as in the step-edge detecting section 21, determines that the step edge including the pixel of interest as one component is not moving, and thus cancels the processing (cancels correction processing including 0 correction).
The correcting section 22 can employ any pixel-value correction method, as long as the pixel value of the pixel of interest corresponding to the edge portion of a moving step edge is corrected based on the motion vector of the pixel of interest detected by the motion detecting section 14. More specifically, the correcting section 22 can employ, for example, the following correction method.
The correcting section 22 calculates the right-hand side of Equation (1) shown below to obtain the correction value R on its left-hand side, and corrects the pixel value of the pixel of interest by adding the calculated correction value R to that pixel value.
In Equation (1), Er represents the difference value supplied by the step-edge detecting section 21, and V represents the magnitude of the motion vector supplied by the motion detecting section 14. Equation (1) presumes that the time response of all display elements of the hold-type display apparatus 2 (e.g., all liquid crystal regions if the hold-type display apparatus 2 is an LCD apparatus) is a first-order lag element with a certain time constant, which is represented with τ in Equation (1). Furthermore, T in Equation (1) represents the time for which the target display frame is displayed (period of time from when the display of the target display frame is specified to when the display of the subsequent frame is specified). Hereinafter, the time T is referred to as the frame time T. In LCD apparatuses, the frame time T is typically 16.6 ms.
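Under the first-order-lag assumption stated above, a correction value of this kind can be derived as follows. This derivation is a sketch consistent with the quantities Er, V, τ, and T, and is not necessarily the exact form of Equation (1): assume the display element starts one frame behind at level E − Er, is driven at the boosted level E + R, and must reach its target E within the dwell time T/V for which the tracked edge rests on it.

```latex
% First-order lag with time constant \tau, driven at level E + R from
% the starting level E - E_r:
%   y(t) = (E + R) + \bigl((E - E_r) - (E + R)\bigr)\,e^{-t/\tau}.
% Requiring y(T/V) = E (target reached within the per-pixel dwell time)
% and solving for the correction value R gives
\[
  R \;=\; E_r\,\frac{e^{-T/(V\tau)}}{1 - e^{-T/(V\tau)}}
    \;=\; \frac{E_r}{\,e^{T/(V\tau)} - 1\,}.
\]
```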
The reference-image storing section 13 stores the image data of the target display frame for use as image data of a reference image for the subsequent frame.
More specifically, if the image data of a new frame is input as image data of the target display frame, the motion detecting section 14 (and the above-described image processing section 11) acquires the image data of the previous frame (frame that was the target display frame just before the current processing) stored in the reference-image storing section 13 for use as the image data of a reference image for the current target display frame. Then, the motion detecting section 14 compares the image data of the target display frame with the image data of the reference image to detect the motion vector of the pixel of interest on the target display frame, and supplies it to the image processing section 11, the image processing section 12 (the step-edge detecting section 21 and the correcting section 22), and the switching section 15.
In fact, the motion detecting section 14 can detect a motion vector with any direction on a two-dimensional plane parallel to the spatial direction X and a spatial direction Y. In short, the direction of a motion vector can be any direction on this two-dimensional plane. In this example, however, it is presumed that only step edges moving in the spatial direction X or in the opposite direction are detected, as described above, and hence the motion detecting section 14 only detects a pixel of interest whose motion vector has a direction equal to the spatial direction X or the opposite direction to make the description more understandable.
More specifically, if a step edge moves by, for example, N (N is any positive integer) pixels in the spatial direction X from one frame to the subsequent frame, the motion detecting section 14 detects “+N” as the motion vector (the motion vector at the pixel of interest, which is one component of the step edge) of the step edge. On the other hand, if the step edge moves by N pixels in the direction opposite to the spatial direction X from one frame to the subsequent frame, the motion detecting section 14 detects “−N” as the motion vector of the step edge. In this example, the direction of a motion vector is denoted with “+” if the step edge moves in the spatial direction X, and the direction of a motion vector is denoted with “−” if the step edge moves in the direction opposite to the spatial direction X, as described above.
The switching section 15 switches the input according to the detection result (motion vector) supplied by the motion detecting section 14.
More specifically, if the magnitude of the motion vector of the pixel of interest supplied by the motion detecting section 14 is less than a threshold (if the pixel of interest is not a pixel included in a moving step edge), the switching section 15 switches the input to the image processing section 11 and supplies the data (pixel value) of the pixel of interest supplied by the image processing section 11 to a display control section 16.
In contrast, if the magnitude of the motion vector of the pixel of interest supplied by the motion detecting section 14 is equal to the threshold or more (if the pixel of interest is a pixel included in a moving step edge), the switching section 15 switches the input to the correcting section 22 of the image processing section 12, and supplies the data (pixel value) of the pixel of interest supplied by the correcting section 22 to the display control section 16.
The display control section 16 converts the data (pixel value) of each of the pixels constituting the target display frame, i.e., the pixels sequentially supplied by the switching section 15, into a predetermined signal format (signal indicating the target level for the corresponding display element in the hold-type display apparatus 2), and then outputs it to the hold-type display apparatus 2. In short, the display control section 16 carries out this processing to issue a command for displaying the target display frame on the hold-type display apparatus 2.
As described above, according to the image processing apparatus 1 of this embodiment, the pixel value of the pixel of interest is corrected based on the motion vector detected by the motion detecting section 14. It should be noted here that this motion vector matches the motion vectors of other pixels neighboring the pixel of interest. In other words, in order to easily detect the motion vector of a pixel matching those of neighboring pixels, that is, in order to solve the above-described second problem, the motion detecting section 14 according to this embodiment has a structure as shown in
Referring to
An input image (image data), that is, the image data of the target display frame is supplied via the LPF 31 to the luminance-gradient detecting section 33 and the template matching section 34.
The luminance-gradient detecting section 33 detects the luminance gradient at the location of a pixel of interest on the target display frame.
The term “luminance gradient” is defined as follows. Assume a function f(x) that takes a coordinate value in a predetermined direction (e.g., the coordinate value x in the spatial direction X in this example) as a parameter and returns the luminance (pixel value) at the coordinate x. The luminance gradient at the location i (i is a coordinate value in the spatial direction X) of a pixel is then defined as the absolute value of the first derivative of the function f(x) at the location i. In short, the luminance gradient at the location i is defined as |f′(i)|.
The generation of function f(x) itself and the calculation of the first derivative of function f(x) require the luminance-gradient detecting section 33 to perform computationally intensive processing. For this reason, for example, a value “slope” in Equation (2) shown below is defined as the luminance gradient. In short, the luminance-gradient detecting section 33 calculates the right-hand side of Equation (2) to obtain the luminance gradient “slope”. As a result, the luminance-gradient detecting section 33 can easily (with light load) calculate the luminance gradient “slope” at the location of a pixel of interest (e.g., a pixel 51 shown in
slope = max(|Yi − Yi−1|, |Yi − Yi+1|)   (2)
In Equation (2), Yi indicates the luminance (pixel value) of the pixel of interest 51 at the location i, as shown in
The luminance-gradient detecting section 33 calculates the absolute difference A between the luminance Yi of the pixel of interest 51 and the luminance Yi−1 of the pixel 52 to the left and the absolute difference B between the luminance Yi of the pixel of interest 51 and the luminance Yi+1 of the pixel 53 to the right. The luminance-gradient detecting section 33 then supplies the absolute difference A or B, whichever is larger (maximum value), to the motion-vector correcting section 36 as the luminance gradient “slope” at the location of the pixel of interest 51.
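In code, Equation (2) reduces to two absolute differences and a maximum. The sketch below assumes the luminance row is a Python list and i is an interior index; the names are illustrative.

```python
def luminance_slope(y, i):
    """Equation (2): luminance gradient at interior pixel i as the larger
    of the absolute differences with the left and right neighbors."""
    return max(abs(y[i] - y[i - 1]), abs(y[i] - y[i + 1]))

# Example: a step edge between locations 2 and 3 yields a large slope there.
row = [10, 10, 10, 200, 200, 200]
print(luminance_slope(row, 3))  # -> 190
```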
As described above, the image data of the target display frame is also supplied to the template matching section 34 via the LPF 31. At this time, the template matching section 34 further receives via the LPF 32 the image data of the previous frame (previous target display frame) stored in the reference-image storing section 13 as the image data of a reference image for the target display frame.
The template matching section 34 then extracts from the previous target display frame a predetermined area (hereinafter, referred to as a window) including at least the location corresponding to the location of the pixel of interest, determines the area on the current target display frame that matches the extracted window, determines a candidate motion vector “pvec” of the pixel of interest based on the matching result, and finally supplies it to the histogram section 35.
Furthermore, the template matching section 34 generates a control signal “flag” used by the motion-vector correcting section 36 and supplies it to the motion-vector correcting section 36. Details of the control signal “flag” will be described later.
Details of this template matching section 34 will now be described with reference to FIGS. 4 to 10.
Referring to
Referring to
Furthermore, the SAD calculating section 61 extracts the following area (referred to as a comparison area, as distinct from a window) from the target display frame. More specifically, a comparison area 73−n composed of a predetermined number of pixels (the same number of pixels as that of the window 72, which is five in the example of
The SAD calculating section 61 calculates the correlation between the window 72 and each of the 13 comparison areas 73−n (the comparison area 73−(−6) to the comparison area 73−(+6)) using a predetermined evaluation function, and supplies the calculation results to the SAD-minimal-value detecting section 62.
The evaluation function used by the SAD calculating section 61 is not limited to a particular function. The SAD calculating section 61 can use a function such as a normalized correlation function or an SSD (Sum of Squared Differences) function. In the following description, the SAD calculating section 61 is presumed to use an SAD (Sum of Absolute Differences).
In short, the SAD calculating section 61 calculates the right-hand side of Equation (3) shown below to obtain a correlation value SAD(j) (j is i+n, and is any integer in the search range i−6 to i+6 in the example of
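Assuming Equation (3) is the standard sum of absolute differences over the five-pixel window of this example, it would read:

```latex
\[
  \mathrm{SAD}(j) \;=\; \sum_{k=-2}^{2}
    \bigl|\, F_{\mathrm{cur}}(j+k) \;-\; F_{\mathrm{prev}}(i+k) \,\bigr|,
  \qquad j = i + n,\quad n = -6,\dots,+6,
\]
% where F_prev is the luminance on the previous frame (the window 72
% centered at i) and F_cur the luminance on the target display frame
% (the comparison area centered at the search location j).
```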
As shown in
More specifically, assume that, as shown in
Referring back to
The minimum value “min” corresponds to the correlation value SAD(z) at the minimal point (z, SAD(z)) (here, z is one of integers from i−6 to i+6) of the curve connecting the correlation values SAD(j). More specifically, in the example of
The above-described processing by the SAD calculating section 61 and the SAD-minimal-value detecting section 62 may be outlined as follows. The SAD calculating section 61 detects the pixel 71 disposed at the location i on the previous frame corresponding to the location of the pixel of interest 51, and extracts the window 72 including at least the pixel 71. The SAD calculating section 61 then sequentially shifts the window 72 along the horizontal line (line parallel to the spatial direction X) including the pixel of interest 51 on the target display frame. At this time, the SAD calculating section 61 uses a predetermined evaluation function (SAD in this example) to calculate the degree of coincidence (correlation) between the window 72 and the area 73−n overlapping the window 72 (comparison area) at each of the shift locations (search locations) i+n of the window 72. The SAD-minimal-value detecting section 62 then sets the center pixel in the comparison area 73−n having the highest degree of coincidence (correlation) (i.e., the minimal value “min”) as the counterpart pixel for the center pixel 71 (i.e., the pixel 71 corresponding to the pixel of interest 51) of the window 72. More specifically, from among the pixels constituting the target display frame, the pixel arranged at the search location “pos” corresponding to the minimal value “min” is set as the counterpart pixel for the center pixel 71 of the window 72.
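This outline condenses into a short sketch. The following Python fragment is illustrative only: it assumes one-dimensional luminance rows and an interior location i, and its names mirror the “pos” and “min” of the text; the candidate motion vector then follows as pvec = pos − i.

```python
def window_match(cur, prev, i, search=6, half=2):
    """Window matching: slide the five-pixel window taken from the previous
    frame around location i across the search range on the current frame,
    and return (pos, min) -- the best search location and its SAD value."""
    window = prev[i - half:i + half + 1]
    sads = {}
    for j in range(i - search, i + search + 1):
        area = cur[j - half:j + half + 1]     # comparison area at location j
        sads[j] = sum(abs(a - b) for a, b in zip(area, window))
    pos = min(sads, key=sads.get)             # search location of minimal SAD
    return pos, sads[pos]
```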
The technique carried out by the SAD calculating section 61 and the SAD-minimal-value detecting section 62 is referred to as, for example, a window matching method, an area-based matching method, or a template matching method. The counterpart pixel is also called the counterpart point.
Strictly speaking, the histogram section 35 is supplied with a candidate motion vector “pvec” which is a vector n originating from the location i of the pixel of interest 51 and terminating at the search location pos (=i+n) (n indicates the amount of shift and the shift direction with respect to the reference location i of the window 72). In this example, as described above, the absolute value of the value n indicates the magnitude of the vector (candidate motion vector “pvec”) and the sign of the value n indicates the direction of the vector (candidate motion vector “pvec”). In more detail, if the value n is positive, the direction of the candidate motion vector “pvec” is the spatial direction X, that is, the right direction with respect to the pixel of interest 51 in
More accurately, as described later, the histogram section 35 generates a histogram of the candidate motion vectors “pvec” of the pixel of interest 51 and pixels neighboring the pixel 51, and determines a motion vector “vec” of the pixel of interest 51 based on this histogram. For this purpose, the SAD calculating section 61 and the SAD-minimal-value detecting section 62 obtain not only the candidate motion vector “pvec” of the pixel of interest 51 but also the candidate motion vector “pvec” of each of the pixels constituting an area composed of a predetermined number of pixels arranged in a row in the direction X, where the center pixel is the pixel of interest 51 (e.g., an area 95 shown in
Assume that, as shown in
In
As shown in
If, as described above, there are two or more possible (candidate) counterpart pixels (counterpart points) for the center pixel 71 in the window 72 (i.e., if the difference between the correlation values SAD(j) at two or more minimal points of SAD(j) expressed in the form of a curve as shown in
In addition, even if there is only one possible (candidate) counterpart pixel (counterpart point) for the center pixel 71 of the window 72 (i.e., if SAD(j) expressed in the form of a curve has only one minimal point, as shown in
In order to evaluate such a minimal value “min”, that is, the confidence level of the counterpart pixel being the pixel at the minimal value “min” (the confidence level of the candidate motion vector “pvec” of the pixel of interest 51 supplied to the histogram section 35), the template matching section 34 according to this embodiment has a structure as shown in
The SAD-minimal-value evaluating section 63 evaluates the confidence level of the pixel at the minimal value “min”, being the counterpart pixel (confidence level of the candidate motion vector “pvec” of the pixel of interest 51 supplied to the histogram section 35). If the SAD-minimal-value evaluating section 63 determines that the confidence level is low, it supplies, for example, “1” as the above-described control signal “flag” to the motion-vector correcting section 36. In this case, as described later, the motion-vector correcting section 36 corrects the motion vector vec of the pixel of interest 51.
In contrast, if the SAD-minimal-value evaluating section 63 determines that the above-described confidence level is high, it supplies, for example, “0” as the above-described control signal “flag” to the motion-vector correcting section 36. In this case, as described later, the motion-vector correcting section 36 cancels (does not perform) the correction of the motion vector vec of the pixel of interest 51.
More specifically, according to this embodiment, a value required by the SAD-minimal-value evaluating section 63 to carry out the above-described evaluation is, for example, calculated or detected by the SAD-minimal-value detecting section 62 and is supplied to the SAD-minimal-value evaluating section 63.
More specifically, according to this embodiment, the values detected by the SAD-minimal-value detecting section 62 include not only the above-described minimal value “min”, that is, the minimum value of the correlation values SAD(j) (hereinafter referred to as the minimal value “min1” to distinguish it from the value “min2” described below), but also, as shown in
Furthermore, according to this embodiment, values calculated by the SAD-minimal-value detecting section 62 include the difference value “eval1” between the average SADave of the correlation values SAD(j) in the search range (from the search locations i−6 to i+6 in the example of
In this case (i.e., if the search range is the search locations from i−6 to i+6), the average SADave is calculated based on Equation (4) shown below.
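Assuming Equation (4) is the plain average over the 13 search locations, it would read:

```latex
\[
  \mathrm{SAD}_{\mathrm{ave}} \;=\; \frac{1}{13}\sum_{j=i-6}^{\,i+6} \mathrm{SAD}(j).
\]
```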
Referring back to
The first condition is to satisfy all of Inequalities (5) to (7) shown below.
eval1 > eval_thresh   (5)
eval2 > eval_thresh   (6)
|pos1 − pos2| > pos_thresh   (7)
In Inequalities (5) and (6), eval_thresh represents a predetermined threshold, which is preset. Likewise, pos_thresh in Inequality (7) represents a predetermined threshold, which is also preset.
As is apparent from Inequalities (5) to (7), the first condition is satisfied if two or more minimal points exist and the difference between the correlation values SAD(j) at every pair of these minimal points is small, as shown in
On the other hand, the second condition is to satisfy Inequality (8) shown below.
min1 > min_thresh   (8)
In Inequality (8), min_thresh represents a predetermined threshold, which is preset.
As is apparent from Inequality (8), the second condition is satisfied if the minimal value “min1” is not sufficiently small. In other words, the second condition is satisfied if the association of the window 72 with the comparison area 73−n for the minimal value “min1” is not established confidently.
Therefore, if it is determined that at least one of the first and second conditions is satisfied, the SAD-minimal-value evaluating section 63 determines that the confidence level of the pixel at the minimal value “min1” being the counterpart pixel (the confidence level of the candidate motion vector “pvec” of the pixel of interest 51 supplied to the histogram section 35) is low and outputs “1” as the control signal “flag” to the motion-vector correcting section 36.
In contrast, if it is determined that neither the first nor the second condition is satisfied, the SAD-minimal-value evaluating section 63 determines that the confidence level of the pixel at the minimal value “min1” being the counterpart pixel (the confidence level of the candidate motion vector “pvec” of the pixel of interest 51 supplied to the histogram section 35) is high and outputs “0” as the control signal “flag” to the motion-vector correcting section 36.
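Putting Inequalities (5) to (8) together, the evaluation can be sketched as follows. The detection of minimal points, and the reading of eval1 and eval2 as the depths of the two smallest minimal points below the average SADave, are assumptions of this sketch.

```python
def local_minima(sads):
    """Minimal points of SAD(j); 'sads' maps each search location j in a
    contiguous range to its correlation value SAD(j)."""
    js = sorted(sads)
    return [(j, sads[j]) for j in js[1:-1]
            if sads[j] <= sads[j - 1] and sads[j] <= sads[j + 1]]

def confidence_flag(sads, eval_thresh, pos_thresh, min_thresh):
    """Control signal 'flag': 1 if the confidence of the counterpart pixel
    (and hence of the candidate vector pvec) is low, 0 otherwise."""
    minima = sorted(local_minima(sads), key=lambda m: m[1])
    if not minima:                    # degenerate SAD curve: no clear match
        return 1
    pos1, min1 = minima[0]
    # Second condition, Inequality (8): even the best match is a poor one.
    if min1 > min_thresh:
        return 1
    if len(minima) >= 2:
        pos2, min2 = minima[1]
        sad_ave = sum(sads.values()) / len(sads)   # Equation (4)
        eval1 = sad_ave - min1        # assumed reading: depth of each
        eval2 = sad_ave - min2        # minimal point below the average
        # First condition, Inequalities (5) to (7): two deep, comparable,
        # well-separated minima, i.e., two rival counterpart candidates.
        if (eval1 > eval_thresh and eval2 > eval_thresh
                and abs(pos1 - pos2) > pos_thresh):
            return 1
    return 0
```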
Referring back to
An area for which a histogram is generated is not limited to a particular area, as long as the area includes the pixel of interest 51. In other words, the number of pixels constituting the area or the location of the pixel of interest 51 in the area is not limited. In this example, it is presumed that an area for which a histogram is generated is an area composed of 17 pixels arranged in a row in the spatial direction X, where the pixel of interest 51 is the center pixel of the arranged pixels. In other words, the area is composed of the pixel of interest 51 at the center, eight pixels at the left (in the direction opposite to the spatial direction X) of the pixel of interest 51, and eight pixels arranged at the right (in the spatial direction X) of the pixel of interest 51.
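As a sketch of the majority vote over such a 17-pixel area (the candidate vectors are assumed to have been generated already):

```python
from collections import Counter

def vote(pvecs):
    """Return the candidate motion vector with the highest frequency in the
    histogram of the 17 candidates (pixel of interest plus 8 on each side)."""
    assert len(pvecs) == 17
    return Counter(pvecs).most_common(1)[0][0]

# Example: eleven "+4" candidates outvote a few outliers, so vec = +4.
print(vote([4] * 11 + [3] * 4 + [0] * 2))  # -> 4
```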
More specifically, assume that the template matching section 34 outputs the candidate motion vector “pvec” of each of the pixels constituting the area 95 shown in
In this case, the histogram section 35 generates a histogram as shown in
The motion-vector correcting section 36 appropriately corrects the motion vector vec of the pixel of interest 51 supplied by the histogram section 35 such that the motion vector vec of the pixel of interest 51 matches the motion vectors of other pixels neighboring the pixel of interest 51 based on the downstream processing (e.g., above-described processing by the image processing section 12 in
The motion vector vec of the pixel of interest 51 corrected by the motion-vector correcting section 36 is output externally via the LPF 37.
Details of this motion-vector correcting section 36 will now be described with reference to FIGS. 13 to 19.
A wide variety of motion-vector correcting sections 36 are conceivable, such as those shown in
Referring to
The downstream-processing correcting section 101 corrects the motion vector vec of the pixel of interest 51 according to the characteristics of the downstream processing, for example, the processing by the above-described image processing section 12 shown in
Hereinafter, the motion vector vec before correction, the motion vector vec corrected by the downstream-processing correcting section 101, and the motion vector vec corrected by the confidence-level correcting section 104, to be described later, are referred to as the motion vector vec, the motion vector vec′, and the motion vector vec″, respectively, if they need to be discriminated from one another. In contrast, if it is not necessary to discriminate among the motion vector vec, the motion vector vec′, and the motion vector vec″, they are referred to as the motion vector vec.
The downstream-processing correcting section 101 outputs the corrected motion vector vec′, which is then supplied to the switching section 102.
The correction method employed by the downstream-processing correcting section 101 is not limited to a particular method. Instead, the downstream-processing correcting section 101 can employ a correction method appropriate for the characteristics of downstream processing.
A correction method appropriate for the processing by the correcting section 22 in
As described above, the correcting section 22 performs correction (including 0 correction) of the pixel value of the pixel of interest 51 supplied by the step-edge detecting section 21 in
From the viewpoint of sections downstream of the motion detecting section 14, for example, the image processing section 12, it is not necessary to discriminate the motion vector vec, the motion vector vec′, and the motion vector vec″ from one another. Therefore, hereinafter, the expression “motion vector vec” is used in the description from the viewpoint of sections downstream of the motion detecting section 14, i.e., the description of processing downstream of the motion detecting section 14.
Assume that the correcting section 22 does not use the correction value R, which is the result of calculation based on the above-described Equation (1), but relies on the relationship shown in
According to the example of the correction method (processing by the correcting section 22) depicted in
Therefore, if the motion detecting section 14 outputs motion vectors vec with magnitudes of about 6 for the pixel of interest 51 and its neighboring pixels, some of those pixels are corrected with a large amount of correction, while others are not corrected at all.
More specifically, assume that the pixel of interest 51 with the motion vector vec of “+6” is output from the motion detecting section 14 and that a neighboring pixel with the motion vector vec of “+7” is also output from the motion detecting section 14.
In this case, only the pixel value of the pixel of interest 51 is corrected, with the maximum amount of correction, while the neighboring pixel is not corrected. This causes the pixel value (corrected value) of the pixel of interest 51 to mismatch the pixel value (uncorrected value) of the neighboring pixel on the resultant correction image (target display frame). In short, the above-described second problem results.
In other words, the motion vector vec “+6” of the pixel of interest 51 is corrected to mismatch the motion vector vec “+7” of the neighboring pixel. That is, the motion vector vec “+6” of the pixel of interest 51 is corrected to a motion vector that does not match its neighbors.
In order to prevent the motion detecting section 14 from outputting motion vectors vec that do not match the motion vectors of the neighboring pixels, that is, in order to prevent the amount of correction for pixel values from changing suddenly at the limits of the search range for the motion vector vec in the correcting section 22, the downstream-processing correcting section 101 in
According to the correction method in
If the downstream-processing correcting section 101 outputs the motion vector vec′ according to the relationship shown in
If the downstream-processing correcting section 101 in
As described above, the method shown in
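Since the figures showing this correction relationship are not reproduced here, the following hedged Python sketch illustrates one possible mapping of this kind; the constants SEARCH_LIMIT and TAPER_START and the linear ramp are assumptions, chosen only so that the amount of correction falls gradually to zero toward the limits of the search range rather than dropping abruptly:

    SEARCH_LIMIT = 6   # assumed half-width of the search range (i-6 to i+6)
    TAPER_START = 4    # assumed magnitude at which the taper begins

    def taper_correct(vec):
        """Map vec to vec' so that vec' falls to 0 at the search-range limits."""
        magnitude = abs(vec)
        if magnitude <= TAPER_START:
            return vec                 # small vectors pass through unchanged
        if magnitude >= SEARCH_LIMIT:
            return 0                   # no correction at or beyond the limits
        # Linear ramp from the full value at TAPER_START down to 0 at the limit.
        scale = (SEARCH_LIMIT - magnitude) / (SEARCH_LIMIT - TAPER_START)
        return int(round(vec * scale))

    # With this mapping, vec = +6 and a neighboring vec = +7 both yield 0, so
    # neighboring pixels no longer receive drastically different corrections.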
Importantly, the correction method employed by the downstream-processing correcting section 101 is not limited to any particular one, as long as it suits the characteristics of the downstream processing. Thus, the correction method can be switched according to those characteristics.
Assume that the relevant downstream processing (not shown) acts on motion vectors vec with magnitudes (absolute values) of an intermediate level (e.g., 3) from among the motion vectors vec of the pixel of interest 51 output by the motion detecting section 14, so as to enhance the effect of those intermediate-magnitude motion vectors.
In this case, the downstream-processing correcting section 101 can correct the motion vector vec into the motion vector vec′ according to the relationship shown in, for example,
According to the correction method in
If the downstream-processing correcting section 101 performs correction based on the relationship shown in
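Similarly, a possible (purely illustrative) mapping for downstream processing that acts on intermediate magnitudes can be sketched as follows; the level and width parameters are assumptions:

    def intermediate_correct(vec, level=3, width=2):
        """Attenuate vec the further its magnitude lies from the given level."""
        distance = abs(abs(vec) - level)
        scale = max(0.0, 1.0 - distance / width)  # 1 at the level, 0 far away
        return int(round(vec * scale))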
Furthermore, if two or more downstream processing operations exist or might exist, the downstream-processing correcting section 101 can switch its correction method as required to correct the motion vector vec. More specifically, if Q (Q being an integer of one or more) downstream processing operations exist, the downstream-processing correcting section 101 can correct the single motion vector vec of the pixel of interest 51 according to each of the Q correction methods and output the Q resultant motion vectors vec′ individually.
Referring back to
The switching section 102 switches the output destination to one of the external LPF 37 and the confidence-level correcting section 104 based on the control of the confidence-level evaluating section 103.
The confidence-level evaluating section 103 evaluates the confidence level of the motion vector vec of the pixel of interest 51 supplied by the histogram section 35 based on the luminance gradient “slope” supplied by the luminance-gradient detecting section 33 and the control signal “flag” supplied by the template matching section 34.
If the confidence-level evaluating section 103 determines that the confidence level of the motion vector vec of the pixel of interest 51 is low, it switches the output destination of the switching section 102 to the confidence-level correcting section 104.
For this reason, if the confidence-level evaluating section 103 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 is low, the motion vector vec′ corrected and output by the downstream-processing correcting section 101 is supplied to the confidence-level correcting section 104. Thereafter, as described later, the motion vector vec′ corrected by the downstream-processing correcting section 101 is further corrected by the confidence-level correcting section 104 into the motion vector vec″, which is then supplied externally (image processing section 11, image processing section 12, and switching section 15 in
In contrast, if the confidence-level evaluating section 103 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 is high, it switches the output destination of the switching section 102 to the external LPF 37.
Thus, if the confidence-level evaluating section 103 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 is high, the motion vector vec′ corrected and output by the downstream-processing correcting section 101 is not supplied to the confidence-level correcting section 104 but supplied externally from the motion detecting section 14 via the LPF 37.
In more detail, for example, if at least one of a third condition requiring that Inequality (9) shown below be established and a fourth condition requiring that the control signal “flag” be “1” is satisfied, the confidence-level evaluating section 103 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 supplied by the histogram section 35 is low, switching the output destination of the switching section 102 to the confidence-level correcting section 104.
In contrast, if neither the third nor the fourth condition is satisfied, the confidence-level evaluating section 103 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 supplied by the histogram section 35 is high, switching the output destination of the switching section 102 to the external LPF 37.
slope < slope_thresh    (9)
In Inequality (9), slope_thresh is a predetermined threshold, which is a preset value.
As is apparent from Inequality (9), if the luminance gradient at the pixel of interest 51 is small, i.e., if the pixel of interest 51 does not correspond to a characteristic portion, such as an edge portion of a step edge as described above, then the third condition is satisfied.
Furthermore, if the control signal “flag”, which, as described above, can be said to be a signal indicating the result of evaluation about the confidence level of the motion vector vec of the pixel of interest 51 in the template matching section 34, is “1”, i.e., if the template matching section 34 evaluates that the confidence level of the motion vector vec of the pixel of interest 51 is low, the fourth condition is satisfied.
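A minimal sketch of this evaluation, using the names from the description (slope, flag, slope_thresh), might read:

    def confidence_is_low(slope, flag, slope_thresh):
        """Evaluate the third and fourth conditions described above."""
        third_condition = slope < slope_thresh  # Inequality (9): weak gradient
        fourth_condition = (flag == 1)          # template matching distrusts vec
        return third_condition or fourth_condition

    # If this returns True, the switching section 102 routes vec' to the
    # confidence-level correcting section 104; otherwise vec' goes to the LPF 37.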
If the output destination of the switching section 102 is switched to the confidence-level correcting section 104, the confidence-level correcting section 104 further corrects, according to a predetermined method, the motion vector vec′ output by the downstream-processing correcting section 101 (i.e., the motion vector vec′ obtained by the downstream-processing correcting section 101 correcting the motion vector vec input from the histogram section 35), and then outputs the resultant motion vector vec″ externally via the LPF 37 (image processing section 11, image processing section 12, and switching section 15 in
The correction method employed by the confidence-level correcting section 104 is not limited to a particular method.
The motion vector vec of the pixel of interest 51 is used by the above-described image processing section 12 in this example. In short, the image processing section 12 corrects the pixel value of the pixel of interest 51 (so as to enhance the image) according to the direction and the magnitude of the motion vector vec of the pixel of interest 51. In this case, if a motion vector vec of the pixel of interest 51 with a low confidence level is used, a problem occurs in that the pixel value of the pixel of interest 51 is overcorrected.
In order to overcome this problem, the confidence-level correcting section 104 can, for example, apply the correction given by the right-hand side of Equation (10) shown below to the motion vector vec′ (obtained as a result of the motion vector vec of the pixel of interest 51 being corrected by the downstream-processing correcting section 101) and output the correction result, i.e., the value vec″ on the left-hand side of Equation (10), externally from the motion detecting section 14 via the LPF 37 as the final (corrected) motion vector of the pixel of interest 51.
vec″ = α × vec′    (10)
In Equation (10), α is a correction coefficient. This correction coefficient α can be set to any value in the range of 0 to 1.
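As a one-line sketch, Equation (10) can be expressed as follows; the default value of α is illustrative only:

    def confidence_correct(vec_prime, alpha=0.5):  # the alpha value is illustrative
        """Equation (10): vec'' = alpha * vec', with alpha in [0, 1]."""
        assert 0.0 <= alpha <= 1.0
        return alpha * vec_prime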
The confidence-level evaluating section 103 can use conditions other than the above-described third condition and the fourth condition to evaluate the confidence level of the motion vector vec of the pixel of interest 51 supplied by the histogram section 35.
More specifically, if, for example, the search location pos2 corresponding to the second-smallest correlation value SAD(j) (i.e., the minimal value “min2” next to the minimal value “min1”) is the location i of the pixel of interest 51 as shown in
For this reason, a fifth condition requiring that, for example, the search location pos2 corresponding to the minimal value “min2” be the location i of the pixel of interest 51 can be added, so that if the fifth condition is satisfied, the confidence-level evaluating section 103 in
Although not shown, if the fifth condition is satisfied, the confidence-level evaluating section 103 informs the confidence-level correcting section 104 of that fact. The confidence-level correcting section 104 then regards the counterpart pixel (counterpart point) for the center pixel 71 of the window 72 as the pixel of interest 51 at the location i (search location pos2), i.e., interprets that there is no motion at the location i of the pixel of interest 51, and changes the motion vector vec′ of the pixel of interest 51 corrected by the downstream-processing correcting section 101 to “0”. In other words, the confidence-level correcting section 104 sets the correction coefficient α in the above-described Equation (10) to 0. More specifically, if the fifth condition is satisfied, the confidence-level correcting section 104 corrects the motion vector vec′ of the pixel of interest 51 into the zero vector.
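A hedged sketch of this fifth-condition handling, with pos2 and the pixel location passed in as plain integers, might read:

    def apply_fifth_condition(vec_prime, pos2, pixel_location):
        """Force alpha to 0 (zero vector) when min2 occurs at the pixel itself."""
        alpha = 0.0 if pos2 == pixel_location else 1.0
        return alpha * vec_prime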
In this case, however, the template matching section 34 (SAD-minimal-value evaluating section 63 in
Alternatively, if the fifth condition is satisfied, the SAD-minimal-value evaluating section 63 may always output “1” as the control signal “flag”.
To explain possible variations of the motion-vector correcting sections 36 in
The motion-vector correcting section 36 in
The motion-vector correcting section 36B in
On the other hand, the motion-vector correcting section 36C in
The example structure of the motion detecting section 14 in
The motion detecting section 14 according to this embodiment sets a predetermined pixel from among the pixels constituting the target display frame as a pixel of interest. The motion detecting section 14 generates a motion vector at the pixel of interest, corrects the motion vector based on the confidence level of the motion vector and the characteristics of downstream processing, and outputs the motion vector externally (the image processing section 11, the image processing section 12, and the switching section 15 in
In more detail, the template matching section 34 compares the target display frame (image data) with the previous frame (image data) to generate the candidate motion vector “pvec” at the pixel of interest and supplies it to the histogram section 35.
In other words, the template matching section 34 detects a first pixel on the target display frame (the first pixel is, for example, a pixel from among the pixels in the search range i−6 to i+6 in
Furthermore, the template matching section 34 supplies first information indicating a command for correcting the motion vector (i.e., the control signal “flag” of “1” in this example) to the motion-vector correcting section 36, if it is determined that the target display frame contains two or more candidate counterpart pixels (e.g., if the pixel at the search location i−1 and the pixel at the search location i+5 are candidate counterpart pixels as shown in
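For illustration, a minimal Python sketch of SAD-based template matching of this kind follows; the 5-pixel window width is an assumption, the ±6 search range follows the description, and which frame supplies the template window is assumed here to be the previous frame:

    def sad(template, window):
        """Sum of absolute differences between two equal-length pixel windows."""
        return sum(abs(a - b) for a, b in zip(template, window))

    def candidate_motion_vector(prev_line, cur_line, i, half_window=2, search=6):
        """Return (pvec, min1): the best-matching offset and its SAD value."""
        assert half_window + search <= i < len(cur_line) - half_window - search
        template = prev_line[i - half_window : i + half_window + 1]
        best_offset, min1 = 0, float("inf")
        for j in range(-search, search + 1):       # search range i-6 to i+6
            window = cur_line[i + j - half_window : i + j + half_window + 1]
            score = sad(template, window)
            if score < min1:                       # keep the minimal value min1
                best_offset, min1 = j, score
        return best_offset, min1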
The histogram section 35 determines, as the motion vector vec at the pixel of interest, the candidate motion vector with the highest frequency (e.g., “+4”, which has the highest frequency in the histogram shown in
The luminance-gradient detecting section 33 calculates the degree of change in luminance around the pixel of interest (e.g., above-described luminance gradient, more specifically, the value “slope” of Equation (2) shown above) and supplies it to the motion-vector correcting section 36.
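Equation (2) is not reproduced in this section; as a stand-in only, a luminance gradient around the pixel of interest could be sketched as the mean absolute difference of neighboring pixel values:

    def luminance_gradient(line, i, radius=1):
        """Mean absolute difference of neighboring pixel values around i."""
        diffs = [abs(line[k + 1] - line[k]) for k in range(i - radius, i + radius)]
        return sum(diffs) / len(diffs)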
The motion-vector correcting section 36 can correct the motion vector vec supplied by the histogram section 35 based on the characteristics of the processing by a downstream section (e.g., the correcting section 22 in
More specifically, the motion-vector correcting section 36 can evaluate the confidence level of the motion vector vec supplied by the histogram section 35 based on the processing result (luminance gradient “slope” in this example) from the luminance-gradient detecting section 33 and the processing result (control signal “flag” in this example) from the template matching section 34. Thereafter, if the motion-vector correcting section 36 evaluates that the confidence-level of the motion vector vec is low, more specifically, if, for example, at least one of the third condition requiring that Inequality (9) shown above be established and the fourth condition requiring that the control signal “flag” supplied by the template matching section 34 in
Alternatively, the motion-vector correcting section 36 can evaluate the confidence level of the motion vector vec supplied by the histogram section 35 based on the processing result from the luminance-gradient detecting section 33 and the processing result from the template matching section 34 without performing correction processing according to the downstream processing, and if the motion-vector correcting section 36 evaluates that the confidence level of the motion vector vec is low, the motion-vector correcting section 36 can correct the motion vector vec supplied by the histogram section 35 and output it externally (image processing section 11, image processing section 12, and switching section 15 in
In this manner, the motion detecting section 14 according to this embodiment can output the motion vector vec (strictly speaking, the corrected motion vector vec′ or motion vector vec″) of a pixel matching those of neighboring pixels. As a result, for example, the image processing section 12 in
Image processing by the image processing apparatus 1 (shown in
First in step S1, the image processing apparatus 1 inputs the image data of the target display frame. In more detail, the image data of the target display frame is input to the image processing section 11, the image processing section 12, the reference-image storing section 13, and the motion detecting section 14.
In step S2, the image processing apparatus 1 (the image processing section 11, the image processing section 12, and the motion detecting section 14, etc.) sets a pixel of interest from among the pixels constituting the target display frame.
In step S3, the motion detecting section 14 compares the image data of the target display frame with the image data of the reference image (previous frame) stored in the reference-image storing section 13 to calculate the motion vector vec of the pixel of interest, corrects the motion vector vec as required, and supplies it to the image processing section 11, the image processing section 12, and the switching section 15.
Hereinafter, the above-described processing (processing in step S3) by the motion detecting section 14 is referred to as “motion-vector calculation processing”. Details of the “motion-vector calculation processing” will be described later with reference to the flowchart in
In step S4, the image processing apparatus 1 (the image processing section 11, the image processing section 12, the switching section 15, etc.) determines whether the magnitude of the motion vector vec of the pixel of interest is equal to the threshold or more.
Strictly speaking, the processing in steps S4 to S7 uses the corrected motion vector vec′ or motion vector vec″ of the pixel of interest output as a result of the “motion-vector calculation processing”, which is the processing in step S3 by the motion detecting section 14. Since the motion vector vec′ and the motion vector vec″ need not be discriminated from each other in the description of the processing in steps S4 to S7, both are referred to simply as the motion vector vec.
If it is determined in step S4 that the magnitude of the motion vector vec is below the threshold (the magnitude of the motion vector vec is not equal to or higher than the threshold), i.e., if the pixel of interest is not moving, then the switching section 15 switches its input to the image processing section 11. As a result, in step S5, the image processing section 11 applies predetermined image processing to the pixel of interest to correct the pixel value of the pixel of interest and supplies the corrected pixel value to the display control section 16 via the switching section 15.
In contrast, if it is determined in step S4 that the magnitude of the motion vector vec is equal to the threshold or more, i.e., if the pixel of interest is moving, the switching section 15 switches its input to the image processing section 12 (correcting section 22).
At this time, in step S6, the step-edge detecting section 21 calculates the difference value between the pixel value of the pixel of interest and the pixel value of the pixel neighboring the pixel of interest in a predetermined direction (the spatial direction X or the opposite direction, whichever is determined according to the direction (positive or negative) of the motion vector vec supplied by the motion detecting section 14 in this example). The step-edge detecting section 21 then supplies the calculated difference value and the pixel value of the pixel of interest to the correcting section 22.
In step S7, the correcting section 22 corrects the pixel value of the pixel of interest supplied by the step-edge detecting section 21 based on the motion vector of the pixel of interest supplied by the motion detecting section 14 and the difference value supplied by the step-edge detecting section 21, and supplies the corrected pixel value to the display control section 16 via the switching section 15.
In step S8, the display control section 16 outputs to the hold-type display apparatus 2 the pixel value of the pixel of interest supplied via the switching section 15 by the image processing section 11 or the image processing section 12 (by converting the pixel value into a signal corresponding to the hold-type display apparatus 2, as required). In other words, the display control section 16 outputs to the hold-type display apparatus 2 the pixel value of the pixel of interest as the target level for the display element corresponding to the pixel of interest from among the display elements of the hold-type display apparatus 2.
In step S9, the image processing apparatus 1 determines whether the pixel values of all pixels have been output.
If it is determined in step S9 that the pixel values of some pixels have not been output, the flow returns to step S2 to repeat the subsequent processing. More specifically, one of unprocessed pixels from among the pixels constituting the target display frame is set as a pixel of interest, and the pixel value of the new pixel of interest is corrected (including 0 correction) and output to the hold-type display apparatus 2.
The above-described processing is repeated until the pixel values of all pixels constituting the target display frame are passed to the hold-type display apparatus 2. If it is determined in step S9 that the pixel values of all pixels have been output, the flow proceeds to step S10.
At this time, the hold-type display apparatus 2 applies voltages of the levels corresponding to the supplied pixel values (target levels) to the display elements (e.g., liquid crystal) constituting the screen, and maintains the levels of voltage until the display of the subsequent frame is specified (until the pixel values of all pixels constituting the subsequent frame are supplied). In short, each of the display elements displays and holds the corresponding pixel.
In step S10, the image processing apparatus 1 determines whether all frames constituting the moving image have been processed.
If it is determined in step S10 that some frames have not been processed, the flow returns to step S1, where image data of the subsequent frame is input as the image data of the target display frame and the same processing is repeated.
Finally, if the pixel values of all pixels constituting the final frame from among the frames constituting the moving image are corrected (including 0 correction) and output to the hold-type display apparatus 2, it is determined in step S10 that all frames have been processed, and the image processing by the image processing apparatus 1 ends.
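The control flow of steps S1 to S10 can be condensed into the following Python sketch; the helper functions are placeholders, not the processing actually performed by the image processing sections 11 and 12:

    def still_correction(value):
        return value  # placeholder for the image processing section 11

    def moving_correction(value, vec):
        return value  # placeholder for the image processing section 12

    def display_moving_image(frames, calc_vec, threshold=1):
        """frames: lists of pixel values; calc_vec(frame, x) returns vec."""
        for frame in frames:                                      # S1 / S10 loop
            out = []
            for x in range(len(frame)):                           # S2 / S9 loop
                vec = calc_vec(frame, x)                          # S3
                if abs(vec) < threshold:                          # S4
                    out.append(still_correction(frame[x]))        # S5 (section 11)
                else:
                    out.append(moving_correction(frame[x], vec))  # S6-S7 (section 12)
            yield out                                             # S8: frame output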
Although, in the example of
The “motion-vector calculation processing” (processing in step S3 of
First in step S21, the luminance-gradient detecting section 33 detects the luminance gradient “slope” around the pixel of interest and supplies it to the motion-vector correcting section 36.
In step S22, the template matching section 34 calculates the candidate motion vector “pvec” at each of pixels including the pixel of interest (e.g., the pixels constituting the area 95 in
Furthermore, in step S23 the template matching section 34 generates the control signal “flag” indicating whether the motion vector vec of the pixel of interest should be corrected, and supplies it to the motion-vector correcting section 36.
In step S24, the histogram section 35 generates a histogram of the candidate motion vectors “pvec” of the pixels including the pixel of interest (e.g., the histogram in
In step S25, the downstream-processing correcting section 101 of the motion-vector correcting section 36A in
If the motion-vector correcting section 36 in
On the other hand, if the motion-vector correcting section 36 in
As described above, the operation of the “motion-vector calculation processing” slightly differs depending on the structure of the motion-vector correcting section 36 in
When the above-described processing in step S25 is completed, the flow proceeds to step S26.
In step S26, the confidence-level evaluating section 103 in
In step S27, the confidence-level evaluating section 103 determines whether the confidence level of the motion vector vec is low based on the result of the processing in the step S26.
If the confidence-level evaluating section 103 determines in step S27 that the confidence level of the motion vector vec is high (not low), it switches the output destination of the switching section 102 to the external LPF 37, and the flow proceeds to step S29.
In step S29, the switching section 102 outputs the motion vector vec′ corrected by the downstream-processing correcting section 101 by the first correction method through the processing in step S25 externally (image processing section 11, image processing section 12, and switching section 15 in
In contrast, if the confidence-level evaluating section 103 determines in step S27 that the confidence level of the motion vector vec is low, it switches the output destination of the switching section 102 to the confidence-level correcting section 104, and the flow proceeds to step S28.
In this case, the motion vector vec′ corrected by the downstream-processing correcting section 101 by the first correction method through the processing in step S25 is supplied to the confidence-level correcting section 104. In step S28, the confidence-level correcting section 104 further corrects the motion vector vec′ of the pixel of interest (corrected by the downstream-processing correcting section 101) by a second correction method and in step S29 outputs the corrected motion vector vec″ externally from the motion detecting section 14 via the LPF 37.
When the corrected motion vector vec′ or motion vector vec″ of the pixel of interest has been output through the processing in step S29 in this manner, the “motion-vector calculation processing” ends.
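Condensing steps S21 to S29 for the structure in which the downstream-processing correcting section operates first, and reusing the sketch functions defined earlier in this section, the branch can be written as:

    def motion_vector_calculation(slope, flag, vec, slope_thresh=1.0):
        vec_prime = taper_correct(vec)                    # S25: first correction
        if confidence_is_low(slope, flag, slope_thresh):  # S26-S27: evaluation
            return confidence_correct(vec_prime)          # S28: second correction
        return vec_prime                                  # S29: output via LPF 37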
The image processing apparatus to which the present invention is applied is not limited to the structure shown in
The image processing apparatus to which the present invention is applied can be realized in the form of the structure shown in, for example,
An image processing apparatus 151 according to this embodiment has basically the same structure and function as the structure and function of the image processing apparatus 1 in
In the image processing apparatus 1 in
Because of the above-described structure, the image processing apparatus 151 operates as described below.
In the image processing apparatus 151, the step-edge detecting section 21 detects the pixels corresponding to a step edge from among the pixels constituting a predetermined frame, and supplies the detection result to the motion detecting section 14 and the image processing section 11, as well as the correcting section 22.
As a result, the motion detecting section 14 can apply its processing only to the pixels (pixels corresponding to the step edge) detected by the step-edge detecting section 21. In other words, the motion detecting section 14 detects whether the step edge detected by the step-edge detecting section 21 is moving.
Furthermore, from among the pixels (pixels corresponding to the step edge) detected by the step-edge detecting section 21, the image processing section 11 does not apply its processing to pixels found to be moving by the motion detecting section 14. More specifically, the image processing section 11 does not apply its processing to pixels corresponding to a moving step edge, whereas it applies its processing to all other pixels.
As described above, in the image processing apparatus 151 in
The above-described sequence of processing can be carried out not only with hardware but also with software.
If software is used to carry out the above-described sequence of processing, the image processing apparatus 1 in
In
The CPU 201, the ROM 202, and the RAM 203 are interconnected via a bus 204. An input/output interface 205 is also connected to the bus 204.
An input unit 206 including, for example, a keyboard and a mouse; an output unit 207 including, for example, an LCD; the storage unit 208 including, for example, a hard disk; and a communicating unit 209 including, for example, a modem and a terminal adapter are connected to the input/output interface 205. The communicating unit 209 carries out communication with other information processing apparatuses (not shown) via a network including the Internet.
In this case, the output unit 207 itself may be a hold-type display apparatus or an external hold-type display apparatus 2 (in
A drive 210 is connected to the input/output interface 205, as required. A removable medium 211 including, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted to the drive 210 so that computer programs are read from the drive 210 and stored in the storage unit 208.
If the sequence of processing is to be implemented using software, a program constituting the software is installed from a network or recording medium to a computer built into dedicated hardware or to, for example, a general-purpose personal computer that requires programs to be installed to carry out the corresponding functions.
As shown in
In the present invention, the steps of the programs recorded on the recording medium are not necessarily followed time-sequentially in the order described; they may instead be followed in parallel or independently of one another.
Although a predetermined one of the pixels constituting a frame corresponds to one of the display elements (liquid crystal for LCD apparatuses) constituting the screen of the hold-type display apparatus 2 in the above-described embodiments, one pixel may correspond to a plurality of display elements. In other words, a plurality of display elements may display one pixel.
Furthermore, although the image processing apparatus generates a motion vector parallel to the spatial direction X in the above-described embodiments, the image processing apparatus can generate a motion vector parallel to the spatial direction Y or a motion vector in any direction on the two-dimensional plane parallel to the spatial direction X and the spatial direction Y through basically the same processing as the above-described series of processing.
Priority: Japanese Patent Application No. 2004-124271, filed April 2004 (JP, national).