The present disclosure relates to an image processor, an image processing method, and a program. More particularly, the present disclosure relates to an image processor which controls an intensity at which image processing is executed, an image processing method used in the same, and a program used in the same.
There is known an apparatus which executes super-resolution processing for increasing an amount of information on an input image which is displayed at a low resolution, thereby converting the resulting input image into a high-definition image which is displayed at a high resolution. In addition, there is also known an apparatus which executes enhancing processing for subjecting an input image supplied thereto to interpolation processing for further increasing a resolution, and filter processing for emphasizing an edge, thereby converting the resulting input image into an enhanced image which is to be displayed at a higher resolution.
A technique is proposed in which when such super-resolution processing or the enhancing processing is executed, for the purpose of improving an S/N (signal/noise) ratio, and a depth feel and a stereoscopic effect, flatness detection is carried out in order to suppress a noise in a flat portion, thereby controlling a super-resolution and enhancement strength of the flat portion. In addition, Japanese Patent Laid-Open Nos. 2010-72982 and 2009-251839 propose a technique as well with which a super-resolution and enhancement strength are controlled in accordance with depth information calculated based on depth detection.
Just by the flatness detection, it is difficult to carry out the suppression in the case where there is a noise which is too strong to be removed away in the processing for the noise reduction, or there is a noise having a large amplitude. When such a noise is tried to be removed away (suppressed), even a texture and an edge which are not desired to be suppressed are suppressed, and as a result, it is difficult to execute the suitable noise suppressing processing.
When the depth detection/suppression processing or the like is executed in accordance with the frequency information within the picture, such processing comes to be local processing. As a result, it may be difficult to take in composition information of the picture at large, and thus there is the possibility that the depth feel and the stereoscopic effect are impaired.
Thus, it has been desired to execute both the image processing for the suitable noise reduction, and the image processing with which the depth feel and the stereoscopic effect are not impaired.
The present disclosure has been made in order to solve the problem described above, and it is therefore desirable to provide an image processor which is capable of executing suitable image processing, an image processing method used in the same, and a program used in the same.
In order to attain the desire described above, firstly, according to an embodiment of the present disclosure, there is provided an image processor including: a detecting portion detecting a composition of an input image; a first generating portion generating first information in accordance with which an intensity of image processing based on the composition detected by the detecting portion is controlled; a second generating portion detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled; a third generating portion detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and an image processing portion executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
Secondly, preferably, the image processor may further include: a fourth generating portion synthesizing the first information and the second information, thereby generating fourth information; and a fifth generating portion synthesizing the third information and the fourth information, thereby generating fifth information, in which the image processing portion executes the image processing in accordance with an intensity based on the fifth information.
Thirdly, preferably, the image processor may further include: a fourth generating portion synthesizing the first information and the second information by obtaining minimum values of the first information and the second information, thereby generating fourth information; and a fifth generating portion synthesizing the third information and the fourth information by obtaining maximum values of the third information and the fourth information, thereby generating fifth information, in which the image processing portion executes the image processing in accordance with an intensity based on the fifth information.
Fourthly, preferably, the first generating portion may set a maximum value and a minimum value of the intensity of the image processing controlled in accordance with the first information based on a reliability of the composition detected by the detecting portion, and may generate the first information falling within a range of the intensity thus set.
Fifthly, preferably, when the input image is divided into parts based on the composition detected by the detecting portion, the first generating portion may detect a line becoming a boundary of the division and may generate the first information in accordance with which the intensity is steeply changed with the line as the boundary.
Sixthly, preferably, the image processing which the image processing portion executes may be at least one piece of processing of super-resolution processing, enhancing processing, noise reducing processing, S/N ratio improving processing, and depth feel and stereoscopic effect improving processing.
Seventhly, according to the embodiment of the present disclosure, there is provided an image processing method for an image processor executing image processing for an input image, the method including: detecting a composition of an input image; generating first information in accordance with which an intensity of image processing based on the composition thus detected is controlled; detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled; detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
Eighthly, according to the embodiment of the present disclosure, there is provided a program in accordance with which a computer controlling an image processor subjecting an input image to image processing is caused to execute processing including: detecting a composition of an input image; generating first information in accordance with which an intensity of image processing based on the composition thus detected is controlled; detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled; detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
In the image processor, the image processing method, and the program according to the embodiment described above of the present disclosure, with the computer for controlling the image processor for subjecting the input image to the image processing, the composition of the input image is detected. Also, the first information in accordance with which the intensity of the image processing based on the composition thus detected is controlled, the second information in accordance with which the intensity of the image processing for the flat portion is controlled, and the third information in accordance with which the intensity of the image processing for the foreground portion is controlled are respectively generated. Also, the predetermined pieces of image processing are executed at the intensity based on the first information, the second information, and the third information.
As set forth hereinabove, according to the present disclosure, the suitable image processing is enabled to be executed. The image processing includes the super-resolution processing, the enhancing processing, the noise reducing processing, the S/N ratio improving processing, and the depth feel and the stereoscopic effect improving processing. Thus, it becomes possible to suitably execute these pieces of image processing.
Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings. It is noted that the description will be given below in accordance with the following order.
1. Configuration of Image Processor
2. Composition Masks
3. Flat Mask, Foreground Mask, Background Mask, and Composition Adaptive Mask
4. Operation of Image Processor
5. Recording Media
The image processor 11 shown in
The image processor 11 is configured in such a way that the image processing portion 27 executes predetermined pieces of processing by using masks which are generated from an input image by the composition detecting portion 21, the composition mask generating portion 22, the background mask generating portion 23, the flat mask generating portion 24, the foreground mask generating portion 25, and the composition adaptive mask generating portion 26.
An input signal which has been inputted to the image processor 11 is supplied to each of the composition detecting portion 21, the flat mask generating portion 24, the foreground mask generating portion 25, and the image processing portion 27. The composition detecting portion 21 detects a composition of an image corresponding to the input signal, and supplies a detection result about the composition to the composition mask generating portion 22. The composition mask generating portion 22 generates a composition mask based on the detection result, about the composition, supplied thereto.
A description will be given below with respect to the compositions detected by the composition detecting portion 21 with reference to
It is noted that an image such that the lower side of a scenery or the like is the long-distance view, and the upper side thereof is the short-distance view is also detected as having the top/bottom composition. The top/bottom composition, as shown on a lower stage of FIG. 2A, is processed as a composition which is adapted to be divided into a picture upper side portion 52 and a picture lower side portion 53. Being processed as the top/bottom composition means that a mask of the top/bottom composition is generated in the composition mask generating portion 22 in the subsequent stage. The mask will be described later with reference to
When an image 61 as shown in an upper stage of
It is noted that an image such that the left-hand side of a scenery or the like is the short-distance view, and the right-hand side thereof is the long-distance view is also detected as having the right/left composition. The right/left composition, as shown on the lower side of
When an image 71 as shown in an upper stage of
It is noted that an image such that the vicinity of the center of a scenery or the like is the long-distance view, and each of right- and left-hand sides thereof is the short-distance view is also detected as having the middle/side composition. As shown on the lower side of
In such a manner, the composition detecting portion 21 detects the composition of the input image. Although the detection of the composition can be carried out by separating the image into the long-distance view and the short-distance view, the composition may also be detected by using any other suitable method. In addition, although in the description given with reference to
Each of the composition masks shown in
In addition, when any of the composition masks shown in
As described above, the composition masks shown in
In addition, the intensity of the image processing such as the super-resolution and the enhancement contains two meanings: the intensity itself in the phase of the image processing such as the super-resolution and the enhancement; and the intensity of the suppression of the super-resolution and the enhancement. However, in this case, the description will be mainly, continuously given on the assumption that the intensity of the image processing such as the super-resolution and the enhancement is the intensity itself in the phase of the image processing such as the super-resolution and the enhancement.
A composition mask 101 used for the top/bottom composition and shown in
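The gradation of intensities described above can be sketched as follows. This is a hypothetical one-dimensional illustration only; the function name, the 0 to 100 intensity scale, and the linear profile are assumptions, not part of the disclosure.

```python
def top_bottom_mask(height, min_intensity=20, max_intensity=100):
    """One intensity value per pixel row: the picture upper side (the
    long-distance view) is suppressed, and the intensity rises linearly
    toward the picture lower side (the short-distance view)."""
    if height < 2:
        return [max_intensity] * height
    span = max_intensity - min_intensity
    return [min_intensity + span * row // (height - 1) for row in range(height)]

mask = top_bottom_mask(5)
print(mask)  # [20, 40, 60, 80, 100]
```

The composition mask for the right/left composition could be made analogously by varying the intensity along the horizontal direction, and that for the middle/side composition by varying the intensity with the distance from the picture center.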
The degree of the change in the intensity of each of the composition masks 101 to 103 is set depending on the input image. This respect will be described below with reference to
A composition mask 101-1 shown in
A composition mask 101-2 shown in
It is noted that although in this case the description is continuously given on the assumption that the intensity is steeply changed with the line as the boundary, the intensity is actually set in such a way that it changes steeply within areas having certain widths before and after that line, respectively, and thus it is not meant that an entirely different intensity is set exactly at the line as the boundary.
Although the composition mask 101-2 and the composition mask 101-3 are the composition masks in which the intensities are steeply changed with the lines L1 and L2 as the boundaries, respectively, the positions of the lines L1 and L2 are different from each other. The lines L1 and L2 are lines detected in the case where the image made the object of the processing is adapted to be clearly separated into the picture upper side portion 52 and the picture lower side portion 53, that is, lines detected as the boundary between the picture upper side portion 52 and the picture lower side portion 53.
That is to say, the composition mask 101-2 shown in
As described above, in the composition detecting portion 21 (refer to
In addition, in the case where the image made the object of the processing is detected as having the right/left composition, there is detected the line in the vertical direction with which the picture left-hand side portion 62 and the picture right-hand side portion 63 are separated from each other. Also, in the case where the image made the object of the processing is detected as having the middle/side composition, there are detected two lines: a line in the vertical direction with which the picture outside portion 72 and the picture middle portion 73 are separated from each other; and a line in the vertical direction with which the picture middle portion 73 and the picture outside portion 74 are separated from each other. In such a manner, the lines suitable for the respective compositions are detected.
However, the degree of the change in the intensity within each of the masks is changed depending on the reliabilities. A description will be given below with respect to composition masks in each of which the degree of the change in the intensity is changed depending on the reliabilities with reference to
When the reliability is high, like a composition mask 101D shown on the right-hand side of
On the other hand, when the reliability is low, there is no change in the intensity based on the composition and the line which have been detected. In this case, the mask 101A is made a composition mask in which the difference between the maximum intensity and the minimum intensity is set to zero with respect to the intensity. That is to say, if the reliability is low, then there is applied a composition mask such that the influence by the composition mask is not reflected in the phase of the image processing in the subsequent stage.
It is noted that when no composition is detected for the image, since the reliability is processed as being low, the composition mask 101A shown in
A composition mask 101B and a composition mask 101C which are shown in the center of
In addition, for example, when the reliability is 40, the composition mask 101B is used and as a result, there is used the mask in which the difference between the maximum intensity and the minimum intensity is set to 40. Also, when the reliability is 80, the composition mask 101C is used and as a result, there is used the mask in which the difference between the maximum intensity and the minimum intensity is set to 80.
In such a manner, the maximum value and the minimum value of the intensity within the composition mask may be set depending on the reliabilities. That is to say, in this case, the maximum intensity is set constant, and the minimum value is set in such a way that the minimum intensity comes close to the maximum intensity as the composition reliability becomes lower. In such a case, for example, a procedure may also be adopted such that the composition mask is generated in which the minimum value of the intensity is set, and the intensity is changed within the range of the intensity between the minimum value thus set, and the maximum value set as the constant value. It is noted that such setting of the intensity based on the reliability is merely an example, and thus the intensity may also be set based on the reliability in relationship to any other suitable factor.
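The relationship described above between the composition reliability and the intensity range of the mask can be sketched as follows. The function name is an assumption; the 0 to 100 scales follow the examples given in the text.

```python
def intensity_range(reliability, max_intensity=100):
    """The maximum intensity is held constant, and the minimum intensity
    comes close to the maximum intensity as the composition reliability
    (0 to 100) becomes lower, so that a low-reliability composition mask
    has little influence on the image processing in the subsequent stage."""
    min_intensity = max_intensity - reliability
    return min_intensity, max_intensity

print(intensity_range(0))   # (100, 100): no influence, as with the mask 101A
print(intensity_range(40))  # (60, 100): difference of 40, as with the mask 101B
print(intensity_range(80))  # (20, 100): difference of 80, as with the mask 101C
```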
The reliability, for example, is obtained from a difference between flatness degrees, bands, amplitudes, or two pieces of color information which are obtained through two-division. The composition reliability is set high for the image in which the composition estimation can be clearly carried out because the difference is large. On the other hand, the composition reliability is set low for the image in which the composition estimation is not reliable because the difference is small. The composition reliability can be set with a numerical value of 0 to 100%.
As a special case, when the reliability should not be made high even though the difference is large, the composition reliability is made low. For example, when it is decided that although the difference is large in the top/bottom composition, the image is an image in which there are many textures and edges in the upper side portion as well, the composition reliability is set low.
In addition, as a special case, when the reliability should be made high even though the difference is small, the composition reliability is made high. For example, when it is decided that although the difference is small in the top/bottom composition, the composition is a scenery composition such that there is the blue sky in the upper side portion, the composition reliability is set high.
It is noted that how to set the composition reliability is merely an example, and thus the composition reliability may also be set by utilizing any other suitable setting method.
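One hypothetical way of obtaining such a reliability from the difference mentioned above is sketched below; the concrete formula and the function name are assumptions for illustration only.

```python
def composition_reliability(upper_feature, lower_feature):
    """Obtain a 0 to 100 composition reliability from the difference between
    two feature values (for example, flatness degrees) obtained through the
    two-division of the picture: a large difference means that the
    composition estimation can be clearly carried out."""
    return min(abs(upper_feature - lower_feature), 100)

print(composition_reliability(90, 20))  # 70: a clear top/bottom separation
print(composition_reliability(50, 45))  # 5: the composition is unreliable
```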
As described above, the composition is detected in the composition detecting portion 21, and there is generated the composition mask corresponding to the composition detected by the composition mask generating portion 22.
Next, a description will now be given with respect to a flat mask generated in the flat mask generating portion 24 (refer to
As shown in
The flat mask generating portion 24 generates the flat mask 201 for suppressing the noise in a flat portion in the phase of the image processing, for example, when the noise reducing processing is executed. The flat portion is, for example, the sky which is present in the upper portion in the flat mask 201 in
The foreground mask generating portion 25 generates the foreground mask 401 for emphasizing an edge and a texture. An edge portion, for example, a boundary portion between a building and the sky in the foreground mask 401 shown in
The flat mask generating portion 24 and the foreground mask generating portion 25 generate the flat mask 201 and the foreground mask 401, respectively, based on the flatness degree, the band, the amplitude, the color information, and the like.
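A minimal one-dimensional sketch of the flat detection described above is given below. The threshold, the 0/100 intensity values, and the one-dimensional neighborhood are assumptions; an actual flat mask would examine a two-dimensional window of pixels.

```python
def flat_mask_1d(pixels, threshold=5):
    """Mark a pixel with intensity 100 (suppress the processing there) when
    the change in pixel values from its left neighbor is small, and with 0
    otherwise."""
    mask = [0] * len(pixels)
    for i in range(1, len(pixels)):
        if abs(pixels[i] - pixels[i - 1]) <= threshold:
            mask[i] = 100
    return mask

print(flat_mask_1d([10, 12, 13, 40, 41]))  # [0, 100, 100, 0, 100]
```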
The background mask generating portion 23 synthesizes the composition mask 101 generated in the composition mask generating portion 22, and the flat mask 201 generated in the flat mask generating portion 24, thereby generating the background mask 301. The synthesis, for example, is carried out by obtaining minimum values (Min) of the composition mask 101 and the flat mask 201. It is noted that although the description is continuously given on the assumption that the synthesis in this case is realized by obtaining the minimum values (Min), and superimposing the minimum values (Min) on each other in terms of a layer, the synthesis may also be realized based either on a weighted average or on a weighted addition of the two masks.
The background mask 301 thus generated is such a mask as not to emphasize a blurred portion, but as to emphasize the near side, thereby enhancing the depth feel and the stereoscopic effect.
The composition adaptive mask generating portion 26 synthesizes the background mask 301 generated in the background mask generating portion 23, and the foreground mask 401 generated in the foreground mask generating portion 25, thereby generating a composition adaptive mask 501. The synthesis, for example, is carried out by obtaining maximum values (Max) of the background mask 301 and the foreground mask 401. It is noted that although the description is continuously given on the assumption that the synthesis in this case is realized by obtaining the maximum values (Max), and superimposing the maximum values (Max) on each other in terms of a layer, the synthesis may also be realized based either on the weighted average or on the weighted addition of the two masks.
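The two synthesis steps described above (minimum values for the background mask 301, maximum values for the composition adaptive mask 501) can be sketched per pixel as follows. The sample intensity values are assumptions for illustration only.

```python
def synthesize_min(mask_a, mask_b):
    """Background-mask style synthesis: per-pixel minimum, so a pixel is
    emphasized only when both masks allow the emphasis."""
    return [min(a, b) for a, b in zip(mask_a, mask_b)]

def synthesize_max(mask_a, mask_b):
    """Composition-adaptive-mask style synthesis: per-pixel maximum, so the
    foreground emphasis survives even where the background mask suppresses."""
    return [max(a, b) for a, b in zip(mask_a, mask_b)]

composition_mask = [80, 60, 40, 20]   # hypothetical composition mask 101
flat_mask = [100, 30, 100, 10]        # hypothetical flat mask 201
foreground_mask = [0, 90, 0, 50]      # hypothetical foreground mask 401

background_mask = synthesize_min(composition_mask, flat_mask)
adaptive_mask = synthesize_max(background_mask, foreground_mask)
print(background_mask)  # [80, 30, 40, 10]
print(adaptive_mask)    # [80, 90, 40, 50]
```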
The composition adaptive mask 501 generated in the composition adaptive mask generating portion 26 is supplied to the image processing portion 27.
Note that, in this case, the description has been given on the assumption that the composition mask 101 and the flat mask 201 are synthesized to generate the background mask 301, and the background mask 301 thus generated and the foreground mask 401 are further synthesized to generate the composition adaptive mask 501. However, a combination of the masks to be synthesized, the order of the synthesis, how to carry out the synthesis, and the like are by no means limited to these combination, the order, and how to carry out the synthesis.
For example, a procedure may also be adopted in which the maximum values (Max) of the composition mask 101 and the foreground mask 401 are obtained, thereby carrying out the synthesis, and the final mask corresponding to the composition adaptive mask 501 is generated by executing processing for subtracting the flat mask 201 from the synthetic mask. In addition, for example, a procedure may also be adopted in which the minimum values (Min) of the flat mask 201 and the foreground mask 401 are obtained, thereby carrying out the synthesis, and the final mask corresponding to the composition adaptive mask 501 is generated by obtaining an average between the composition mask 101 and the synthetic mask.
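The first alternative procedure mentioned above (maximum values of the composition mask 101 and the foreground mask 401, followed by subtraction of the flat mask 201) might be sketched as follows; clamping negative results to zero is an assumption of this sketch.

```python
def alternative_adaptive_mask(composition, foreground, flat):
    """Obtain maximum values of the composition mask and the foreground
    mask, then subtract the flat mask from the synthetic mask (negative
    results are clamped to zero here)."""
    merged = [max(c, f) for c, f in zip(composition, foreground)]
    return [max(m - fl, 0) for m, fl in zip(merged, flat)]

print(alternative_adaptive_mask([80, 20], [10, 90], [30, 30]))  # [50, 60]
```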
In the manner as described above, plural sheets of masks are synthesized to generate one sheet of mask, that is, the composition adaptive mask 501 in this case, whereby there is obtained the mask having the good advantages of the plural sheets of masks. Also, it becomes possible to use such a mask in the image processing.
In the image processing executed in the image processing portion 27, the strength control for each pixel for the super-resolution processing and the enhancing processing is carried out by using the composition adaptive mask 501 to which the composition information is applied. Also, there is executed the processing aiming at improving the S/N ratio, and the depth feel and the stereoscopic effect.
In addition, the object of the control can also be set to the processing which will be described below.
The image processing which is effective in the S/N ratio improvement, and the depth feel and the stereoscopic effect improvement can be executed by changing the intensity, similarly to the case of the intensity control for each pixel for the super-resolution and the enhancement, for:
(i) processing for improving an S/N ratio based on the intensity setting for noise reduction (NR),
(ii) processing for improving the depth feel and the stereoscopic effect based on contrast processing, and
(iii) processing for improving the depth feel and the stereoscopic effect based on color correction (color difference, saturation).
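For any of the pieces of processing (i) to (iii), the per-pixel strength control by the composition adaptive mask 501 can be sketched as a blend between the unprocessed pixel and the fully processed pixel. This is a hypothetical illustration; the 0 to 100 intensity scale and the integer blend are assumptions.

```python
def apply_with_mask(original, processed, mask):
    """Blend each fully processed pixel with its original value according to
    the per-pixel intensity (0 to 100) of the composition adaptive mask:
    intensity 0 leaves the pixel untouched, and intensity 100 applies the
    processing at full strength."""
    return [(o * (100 - m) + p * m) // 100
            for o, p, m in zip(original, processed, mask)]

print(apply_with_mask([10, 10, 10], [110, 110, 110], [0, 50, 100]))
# [10, 60, 110]
```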
The composition adaptive mask 501 is the information in accordance with which, when such image processing is executed, it is controlled for which of the pixels and at how large an intensity the processing should be executed. Therefore, the composition mask 101, the flat mask 201, the background mask 301, the foreground mask 401, and the composition adaptive mask 501 are respectively the five pieces of information in accordance with which the intensities for the image processing are controlled.
Therefore, even if the image processing is executed by using one sheet of mask, since the mask itself is the information in accordance with which the intensity for the image processing is controlled, the image processing can be executed at the intensity corresponding to the mask concerned. The mask in the image processor 11 of the embodiment is the information in accordance with which the intensity for the image processing is controlled in such a manner. Thus, even when a form is not the form of the mask, the technique of the present disclosure can be applied thereto as long as the form other than the mask corresponds to the information in accordance with which the intensity for the image processing is controlled.
An operation of the image processor 11 shown in
In processing in step S11, composition mask generating processing is executed. The composition mask generating processing is executed by both of the composition detecting portion 21 and the composition mask generating portion 22 in the manner as will be described with reference to the flow chart of
In processing in step S12, the flat mask 201 is generated. The flat mask 201 is generated by the flat mask generating portion 24. As has been described with reference to
In processing in step S13, the foreground mask 401 is generated. The foreground mask 401 is generated by the foreground mask generating portion 25. As has been described with reference to
In processing in step S14, the background mask 301 is generated. The background mask 301 is generated by the background mask generating portion 23. As has been described with reference to
In processing in step S15, the composition adaptive mask 501 is generated. The composition adaptive mask 501 is generated by the composition adaptive mask generating portion 26. As has been described with reference to
The composition adaptive mask 501 generated in such a manner is supplied to the image processing portion 27. Also, the image processing corresponding to the level set by the composition adaptive mask 501 is executed for the data corresponding to the input image, and the resulting data is then outputted to a processing portion (not shown) in the subsequent stage.
The composition mask generating processing executed in the processing in step S11 of the flow chart shown in
In processing in step S32, the composition reliability is calculated. The calculation for the composition reliability may be carried out in any of the composition detecting portion 21 and the composition mask generating portion 22. The composition reliability of 0 to 100% is obtained from the differences between the flatness degrees, the bands, the amplitudes, and the two pieces of the color information which are obtained through the two-division. For the image in which each of the differences is large and thus the composition estimation can be clearly carried out, the composition reliability is set high. On the other hand, for the image in which each of the differences is small and thus the composition estimation is not reliable, the composition reliability is set low.
In processing in step S33, the composition mask 101 is generated by the composition mask generating portion 22. As previously described with reference to
In addition, there may also be generated the composition mask 101 such that as previously described with reference to
Such processing is executed as may be necessary, thereby generating the composition mask 101.
In processing in step S34, stabilizing processing is executed by the composition mask generating portion 22. It is possible that the composition mask 101 is abruptly changed due to a variation of the moving image response or the composition detection result. For the purpose of preventing the composition mask 101 from being abruptly changed along with that change, such time constant control as to cause the change to be slowly carried out up to the composition mask 101 of the target is executed by executing Infinite Impulse Response (IIR) processing, thereby realizing the stabilization.
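The time constant control by the IIR processing mentioned above can be sketched as a first-order recursion. The coefficient alpha is an assumption of this sketch; a larger alpha means a faster approach to the target mask.

```python
def iir_step(current_mask, target_mask, alpha=0.25):
    """One IIR update: move the current composition mask a fraction alpha
    toward the newly generated target mask, so that an abrupt change in the
    composition detection result is reflected only slowly."""
    return [c + alpha * (t - c) for c, t in zip(current_mask, target_mask)]

mask = [0.0, 0.0]
for _ in range(3):
    mask = iir_step(mask, [100.0, 40.0])
print(mask)  # approaches [100.0, 40.0] gradually: [57.8125, 23.125]
```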
By executing such processing, the composition mask 101 is generated and is then supplied to the background mask generating portion 23. After that, the operation proceeds to the processing in step S12. Also, by executing the predetermined pieces of processing as described above, the composition adaptive mask 501 is generated and is then supplied to the image processing portion 27.
In the manner as described above, with the technique of the present disclosure, the mask is used which is obtained by obtaining the minimum values of the composition mask 101 and the flat mask 201 which have been generated based on the composition detection. Therefore, for the partial picture on the picture upper side, such as the scenery, it is possible to suppress (remove away) even the noise which cannot be removed away by the noise reduction, for example, the noise which cannot be removed away by Random Noise Reduction or MPEG Noise Reduction for the compression strain, or even the noise which cannot be suppressed by the flat mask alone.
For example, there are many flat portions in the partial picture, on the picture upper side, such as the scenery like the image 51 shown in
In addition, with the technique of the present disclosure, the mask is used which is obtained by obtaining the minimum values of the composition mask 101 and the flat mask 201 which have been generated based on the composition detection. Therefore, it is possible to prevent the emphasis of the portion which is out-of-focus (the blurred portion) in the composition such that the depth of field is shallow.
For example, when the super-resolution and the enhancement are strongly applied to the portion which is out-of-focus (the blurred portion) like the image 61 shown in
For example, when the image 61 shown in
In addition, the mask is used which is obtained by adding the foreground mask 401 to the background mask 301. Therefore, even when the composition mask 101 is the mask such that the suppression is uniformly heightened, it is possible to prevent the situation in which it may be impossible to emphasize the edge and texture of the foreground. In other words, the edge and the texture which are to be emphasized are emphasized, whereby it is possible to prevent the situation in which the image becomes the image the whole of which gives a blurred impression to the viewer by executing the image processing.
For example, edges and texture are both present in portions such as the buildings of the scenery of the image 51 shown in
In addition, with the technique of the present disclosure, it is possible to suppress both the noise and the out-of-focus portion, which are conspicuous in the flat portion on the upper side of a picture of scenery or the like. With the existing image quality adjustment, the noise emphasis and the gradation deterioration in that portion are conspicuous, so the only available measures are either to weaken the intensities of the super-resolution and the enhancement over the entire picture, or to make the noise inconspicuous by blurring the entire screen. According to the technique of the present disclosure, however, since the intensities of the super-resolution and the enhancement can be partially and optimally controlled in consideration of the composition, the effect of the super-resolution and the enhancement can be further heightened.
As described above, with the technique of the present disclosure, the composition mask 101 is a rough composition mask which grasps the features of the entire picture: for example, when the picture has a top/bottom composition in which the upper side is a long-distance view such as scenery, the suppression is uniformly heightened toward the upper side of the picture. Since it is only necessary to make such a rough composition mask 101, robust effects can be obtained.
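One way such a rough top/bottom composition mask could be constructed is sketched below. This is only an illustrative construction under stated assumptions (intensities normalized to [0, 1], a fixed top/bottom boundary ratio, a linear ramp); none of the names or values come from the disclosure.

```python
import numpy as np

def rough_top_bottom_mask(height, width, boundary_ratio=0.5,
                          min_intensity=0.2, max_intensity=1.0):
    """Sketch of a rough top/bottom composition mask.

    Rows above the boundary (e.g. a long-distance view such as the
    sky) receive progressively stronger suppression (lower intensity)
    toward the top of the picture; rows at and below the boundary
    keep full intensity.  All parameters are illustrative.
    """
    mask = np.full((height, width), max_intensity)
    boundary = int(height * boundary_ratio)
    if boundary > 0:
        # Linearly ramp from min_intensity at the top row
        # to max_intensity at the boundary row.
        ramp = np.linspace(min_intensity, max_intensity, boundary)
        mask[:boundary, :] = ramp[:, np.newaxis]
    return mask

m = rough_top_bottom_mask(6, 4)
print(m[:, 0])  # top rows are more strongly suppressed than rows near the boundary
```

Because the mask varies only gradually with the vertical position and ignores fine per-area detail, no processing boundary becomes visible inside the picture, which is the source of the robustness described above.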
For example, if, for the detection of the composition, a fine composition mask controlled in accordance with the frequency information for each area is made, it may be impossible to take in the composition information on the entire picture, or the processing boundary may become visible, so the robustness is lowered. As described above, however, according to the technique of the present disclosure, robust effects can be obtained.
The series of processing described above can be executed either by hardware or by software. When the series of processing is executed by the software, a program composing the software is installed in a computer. Here, the computer, for example, includes a computer incorporated in dedicated hardware, and a general-purpose personal computer which can execute various kinds of functions by installing therein various kinds of programs.
The input portion 1006 is composed of a keyboard, a mouse, a microphone, or the like. The output portion 1007 is composed of a display device, a speaker, or the like. The storage portion 1008 is composed of a hard disk, a non-volatile memory, or the like. The communication portion 1009 is composed of a network interface or the like. The drive 1010 drives removable media 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
With the computer configured in the manner as described above, for example, the CPU 1001 loads the program stored in the storage portion 1008 into the RAM 1003 through the input/output interface 1005 and the bus 1004 in order to execute the program, thereby executing the series of processing described above.
The program which the computer (the CPU 1001) executes can, for example, be recorded on the removable media 1011 as packaged media to be provided. In addition, the program can be provided through a wired or wireless transmission medium such as a Local Area Network (LAN), the Internet, or digital satellite broadcasting.
In the computer, the program can be installed in the storage portion 1008 through the input/output interface 1005 by mounting the removable media 1011 to the drive 1010. In addition, the program can be received at the communication portion 1009 through the wired or wireless transmission medium and installed in the storage portion 1008. Alternatively, the program can be installed in advance either in the ROM 1002 or in the storage portion 1008.
It is noted that the program which the computer executes either may be a program in accordance with which predetermined pieces of processing are executed in a time series manner along the order described in this specification, or may be a program in accordance with which the predetermined pieces of processing are executed in parallel or at a necessary timing such as when a call is made.
In this specification, the system means the entire apparatus composed of plural devices or units.
It is noted that the embodiments of the present disclosure are by no means limited to the embodiments described above, and various kinds of changes can be made without departing from the subject matter of the present disclosure.
It is noted that the technique of the present disclosure can also adopt the following constitutions.
(1) An image processor including:
a detecting portion detecting a composition of an input image;
a first generating portion generating first information in accordance with which an intensity of image processing based on the composition detected by the detecting portion is controlled;
a second generating portion detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled;
a third generating portion detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and
an image processing portion executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
(2) The image processor described in the paragraph (1), further including:
a fourth generating portion synthesizing the first information and the second information, thereby generating fourth information; and
a fifth generating portion synthesizing the third information and the fourth information, thereby generating fifth information, in which the image processing portion executes the image processing in accordance with an intensity based on the fifth information.
(3) The image processor described in the paragraph (1), further including:
a fourth generating portion synthesizing the first information and the second information by obtaining minimum values of the first information and the second information, thereby generating fourth information; and
a fifth generating portion synthesizing the third information and the fourth information by obtaining maximum values of the third information and the fourth information, thereby generating fifth information, in which the image processing portion executes the image processing in accordance with an intensity based on the fifth information.
(4) The image processor described in any one of the paragraphs (1) to (3), in which the first generating portion sets a maximum value and a minimum value of the intensity of the image processing controlled in accordance with the first information based on a reliability of the composition detected by the detecting portion, and generates the first information falling within a range of the intensity thus set.
(5) The image processor described in any one of the paragraphs (1) to (4), in which when the input image is divided into parts based on the composition detected by the detecting portion, the first generating portion detects a line becoming a boundary of the division and generates the first information in accordance with which the intensity is steeply changed with the line as the boundary.
(6) The image processor described in any one of the paragraphs (1) to (5), in which the image processing which the image processing portion executes is at least one piece of processing of super-resolution processing, enhancing processing, noise reducing processing, S/N ratio improving processing, and depth feel and stereoscopic effect improving processing.
(7) An image processing method for an image processor executing image processing for an input image, the method including:
detecting a composition of an input image;
generating first information in accordance with which an intensity of image processing based on the composition thus detected is controlled;
detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled;
detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and
executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
(8) A program in accordance with which a computer controlling an image processor subjecting an input image to image processing is caused to execute processing, including:
detecting a composition of an input image;
generating first information in accordance with which an intensity of image processing based on the composition thus detected is controlled;
detecting a flat portion in which a change in pixel values corresponding to the input image is small, and generating second information in accordance with which an intensity of image processing for the flat portion is controlled;
detecting a foreground portion of the input image, and generating third information in accordance with which an intensity of image processing for the foreground portion is controlled; and
executing image processing in accordance with an intensity based on the first information, the second information, and the third information.
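The method steps of paragraph (7) can be sketched end to end as follows. This is a hedged illustration, not the disclosed implementation: the flatness detection stub (a simple local-gradient test), the blending scheme, and all names and thresholds are assumptions, and the composition mask and foreground mask are taken as given inputs standing in for the composition detection and foreground detection.

```python
import numpy as np

def detect_flat_mask(image, threshold=4.0):
    """Second information: suppress areas where pixel values change little.

    A simple local-gradient activity test stands in for the flatness
    detection; the 0.2 / 1.0 intensities and the threshold are
    illustrative values only.
    """
    gy, gx = np.gradient(image.astype(float))
    activity = np.abs(gx) + np.abs(gy)
    return np.where(activity < threshold, 0.2, 1.0)

def process(image, composition_mask, foreground_mask, enhance):
    """Execute image processing at a per-pixel intensity based on the
    first (composition), second (flat), and third (foreground) information."""
    flat_mask = detect_flat_mask(image)
    # Minimum of first and second information, maximum with the third,
    # as in paragraph (3).
    intensity = np.maximum(np.minimum(composition_mask, flat_mask),
                           foreground_mask)
    # Blend the enhanced image with the original according to intensity.
    return intensity * enhance(image) + (1.0 - intensity) * image
```

As a usage example, `process(image, comp, fg, lambda im: im * 1.1)` applies a toy 10% gain at full strength near edges while applying only 20% of that gain in flat areas, so the flat-area noise is not amplified.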
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2012-140387 filed in the Japan Patent Office on Jun. 22, 2012, the entire content of which is hereby incorporated by reference.
Number | Date | Country | Kind |
---|---|---|---|
2012-140387 | Jun 2012 | JP | national |