The present technology relates to an image processing apparatus. Specifically, the present technology relates to an image processing apparatus, an imaging apparatus, and an image processing method which correct noise, and a program which causes a computer to execute the method.
In recent years, an imaging apparatus, such as a digital still camera or a digital video camera (for example, a recorder with a camera), which captures a subject, such as a person, to generate a captured image and records the generated captured image has come into wide use. The image captured by the digital imaging apparatus generally includes noise.
Noise of the captured image includes noise (high-frequency noise) which appears randomly in a small number of pixels and can be removed by a filter with a small number of taps, and noise (low-frequency noise) which appears in a wide range of pixels and can be removed only by a filter with a large number of taps.
Low-frequency noise can be removed by processing in a filter with a large number of taps. However, processing by a filter with a large number of taps is heavy. For this reason, a method of simply removing low-frequency noise has been suggested. For example, an image processing method which removes low-frequency noise on the basis of an input image and a reduced image of the input image has been suggested (for example, see JP-A-2004-295361).
In this image processing method, an average value in a predetermined range is compared with a pixel value in the input image to separate noise from a significant signal, and a pixel value with a lot of noise is replaced with replaced data generated from the reduced image, thereby removing low-frequency noise in the input image.
In the related art, replaced data is generated from the reduced image, whereby low-frequency noise in the input image can be removed. However, since replaced data generated from the reduced image is an image having less high-frequency components and low resolution, when replacement is done at an edge or a near edge, resolution may be lowered. Accordingly, it is important to remove noise such that resolution in an image is not damaged.
It is therefore desirable to improve image quality in an image subjected to noise removal processing.
An embodiment of the present technology is directed to an image processing apparatus including a noise-removed image generation unit which, on the basis of an input image and a reduced image obtained by reducing the input image at predetermined magnification, generates a noise-removed image with noise in the input image removed, and a corrected image generation unit which generates, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generates an edge-corrected image on the basis of the noise-removed image and the high-frequency component image, an image processing method, and a program. With this configuration, edge correction is performed on the noise-removed image generated on the basis of the input image and the reduced image using the frequency component of the noise-removed image in the same band as the frequency component to be removed by the band limitation when generating the reduced image.
In the embodiment of the present technology, the corrected image generation unit may generate the high-frequency component image by subtraction processing for each pixel between a low-frequency component image primarily having a frequency component not to be removed by the band limitation and the noise-removed image. With this configuration, the high-frequency component image is generated by the subtraction processing for each pixel between the low-frequency component image primarily having the frequency component not to be removed by the band limitation and the noise-removed image.
In the embodiment of the present technology, the noise-removed image generation unit may generate a second noise-removed image by enlarging an image with noise in the reduced image removed at the predetermined magnification and may then generate the noise-removed image by addition processing for each pixel between the second noise-removed image and the input image in accordance with an addition ratio set for each pixel, and the corrected image generation unit may generate the high-frequency component image using the second noise-removed image as the low-frequency component image. With this configuration, the high-frequency component image is generated using the second noise-removed image obtained by enlarging the image with noise in the reduced image removed at the predetermined magnification.
In the embodiment of the present technology, the corrected image generation unit may generate the high-frequency component image using an image obtained by reducing and then enlarging the noise-removed image at the predetermined magnification as the low-frequency component image. With this configuration, the high-frequency component image is generated using the image obtained by reducing and then enlarging the noise-removed image at the predetermined magnification.
In the embodiment of the present technology, the corrected image generation unit may generate the high-frequency component image using an image obtained by reducing and then enlarging the reduced image at the predetermined magnification as the low-frequency component image. With this configuration, the high-frequency component image is generated using the image obtained by reducing and then enlarging the reduced image at the predetermined magnification.
In the embodiment of the present technology, the corrected image generation unit may generate the edge-corrected image by unsharp mask processing on the basis of the noise-removed image and the high-frequency component image. With this configuration, edge correction is performed by the unsharp mask processing.
Another embodiment of the present technology is directed to an image processing apparatus including a reduced image generation unit which generates a reduced image by reducing an input image at predetermined magnification, a noise-removed image generation unit which generates a noise-removed image with noise in the input image removed on the basis of the input image and the reduced image when edge enhancement is performed on the input image, and a corrected image generation unit which generates a high-frequency component image on the basis of the generated reduced image and the noise-removed image when the edge enhancement is performed and generates an edge-corrected image by unsharp mask processing on the basis of the noise-removed image and the high-frequency component image. With this configuration, when edge enhancement is performed, edge correction is performed on the noise-removed image generated on the basis of the input image and the reduced image using the frequency component of the noise-removed image in the same band as the frequency component to be removed by the band limitation when generating the reduced image.
In another embodiment of the present technology, the corrected image generation unit may generate a second high-frequency component image on the basis of the reduced image and the input image when contrast enhancement is performed on the input image and may generate a contrast-enhanced image by the unsharp mask processing on the basis of the input image and the second high-frequency component image, and the noise-removed image generation unit may generate an image with noise in the contrast-enhanced image removed on the basis of the reduced image and the contrast-enhanced image when the contrast enhancement is performed. With this configuration, when contrast enhancement is performed, noise removal using the reduced image is performed after contrast enhancement is performed by the unsharp mask processing.
Still another embodiment of the present technology is directed to an imaging apparatus including a lens unit which condenses subject light, an imaging device which converts subject light to an electrical signal, a signal processing unit which converts the electrical signal output from the imaging device to a predetermined input image, a noise-removed image generation unit which, on the basis of an input image and a reduced image obtained by reducing the input image at predetermined magnification, generates a noise-removed image with noise in the input image removed, a corrected image generation unit which generates, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generates an edge-corrected image on the basis of the noise-removed image and the high-frequency component image, and a recording processing unit which compresses and encodes the generated edge-corrected image to generate and record recording data. With this configuration, edge correction is performed on the noise-removed image generated on the basis of the input image and the reduced image using the frequency component of the noise-removed image in the same band as the frequency component to be removed by band limitation when generating the reduced image, and the image subjected to the edge correction is recorded.
The embodiments of the present technology have a beneficial effect of improving image quality in an image subjected to noise removal processing.
Hereinafter, a mode (hereinafter, referred to as an embodiment) for carrying out the present technology will be described. The description will be provided in the following sequence.
1. First Embodiment (image processing control: an example where reduction NR processing and unsharp mask processing are performed using the same reduction ratio)
2. Second Embodiment (image processing control: an example where contrast enhancement of an entire image and reduction NR processing are performed)
3. Modification
The imaging apparatus 100 is an imaging apparatus (for example, a compact digital camera) which captures a subject to generate image data (captured image) and records the generated image data as an image content (still image content or motion image content).
The imaging apparatus 100 includes a lens unit 110, an imaging device 120, a preprocessing unit 130, an YC conversion unit 140, an NR (Noise Reduction) unit 200, and a size conversion unit 150. The imaging apparatus 100 includes a recording processing unit 161, a recording unit 162, a display processing unit 171, a display unit 172, a bus 181, and a memory 182.
The bus 181 is a bus for data transfer in the imaging apparatus 100. For example, when image processing is performed, data which should be temporarily stored is stored in the memory 182 through the bus 181.
The memory 182 temporarily stores data in the imaging apparatus 100. The memory 182 is used as, for example, a work area of each kind of signal processing in the imaging apparatus 100. The memory 182 is realized by, for example, a DRAM (Dynamic Random Access Memory).
The lens unit 110 condenses light (subject light) from the subject.
The imaging device 120 receives subject light and photoelectrically converts it to an electrical signal. The imaging device 120 is realized by, for example, a solid-state imaging device, such as a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor. The imaging device 120 supplies the generated electrical signal to the preprocessing unit 130 as an image signal (RAW signal).
The preprocessing unit 130 performs various kinds of signal processing on the image signal (RAW signal) supplied from the imaging device 120. For example, the preprocessing unit 130 performs image signal processing, such as noise removal, white balance adjustment, color correction, edge enhancement, gamma correction, and resolution conversion. The preprocessing unit 130 supplies the image signal subjected to various kinds of signal processing to the YC conversion unit 140.
The YC conversion unit 140 converts the image signal supplied from the preprocessing unit 130 to a YC signal. The YC signal is an image signal including a luminance component (Y) and a red/blue color-difference component (Cr/Cb). The YC conversion unit 140 supplies the generated YC signal to the NR unit 200 through a signal line 209. The YC conversion unit 140 and the preprocessing unit 130 are an example of a signal processing unit described in the appended claims.
The NR unit 200 removes noise included in the image supplied from the YC conversion unit 140 as the YC signal. The NR unit 200 performs noise removal processing using a reduced image and also performs unsharp mask processing for restoring resolution which is lowered during the noise removal processing. Accordingly, the NR unit 200 generates an image in which low-frequency noise is reduced and resolution is satisfactory at an edge and a near edge. In the first embodiment of the present technology, for convenience of description, description will be provided dividing an image into an edge, a near edge, and a flat portion. An edge, a near edge, and a flat portion will be described referring to the drawings.
The internal configuration of the NR unit 200 will be described referring to the drawings.
The size conversion unit 150 converts the size of the NR image supplied from the NR unit 200 to the size of an image for recording or the size of an image for display. The size conversion unit 150 supplies the generated image for recording (recording image) to the recording processing unit 161. The size conversion unit 150 supplies the generated image for display (display image) to the display processing unit 171.
The recording processing unit 161 compresses and encodes the image supplied from the size conversion unit 150 to generate recording data. When recording a still image, the recording processing unit 161 compresses the image using an encoding format (for example, JPEG (Joint Photographic Experts Group) system) which is used to compress the still image, and supplies data (still image content) of the compressed image to the recording unit 162. When recording a motion image, the recording processing unit 161 compresses the image using an encoding format (for example, MPEG (Moving Picture Experts Group) system) which is used to compress the motion image, and supplies data (motion image content) of the compressed image to the recording unit 162.
When reproducing an image stored in the recording unit 162, the recording processing unit 161 restores the image by the compression encoding format of the image, and supplies the restored image signal to the display processing unit 171.
The recording unit 162 records recording data (still image content or motion image content) supplied from the recording processing unit 161. The recording unit 162 is realized by, for example, a recording medium (a single recording medium or a plurality of recording mediums), such as a semiconductor memory (memory card or the like), an optical disc (a BD (Blu-ray Disc), a DVD (Digital Versatile Disc), a CD (Compact Disc), or the like), or a hard disk. The recording mediums may be embedded in the imaging apparatus 100 or may be detachable from the imaging apparatus 100.
The display processing unit 171 converts the image supplied from the size conversion unit 150 to a signal for display on the display unit 172. For example, the display processing unit 171 converts the image supplied from the size conversion unit 150 to a standard color video signal of an NTSC (National Television System Committee) system, and supplies the converted standard color video signal to the display unit 172. When reproducing the image recorded in the recording unit 162, the display processing unit 171 converts the image supplied from the recording processing unit 161 to a standard color video signal, and supplies the converted standard color video signal to the display unit 172.
The display unit 172 displays the image supplied from the display processing unit 171. For example, the display unit 172 displays a monitor image (live view image), a setup screen of various functions of the imaging apparatus 100, a reproduced image, or the like. The display unit 172 is realized by, for example, a color liquid crystal panel, such as an LCD (Liquid Crystal Display) or an organic EL (Electro-Luminescence).
The preprocessing unit 130, the YC conversion unit 140, the NR unit 200, the size conversion unit 150, the recording processing unit 161, and the display processing unit 171 in the functional configuration are realized by, for example, a DSP (Digital Signal Processor) for image processing which is provided in the imaging apparatus 100.
Next, the internal configuration of the NR unit 200 will be described referring to the drawings.
The NR unit 200 includes a high-frequency noise removal unit 210, a reduction NR unit 220, and an edge restoration unit 230.
The high-frequency noise removal unit 210 removes high-frequency noise from among the noise included in the image supplied through the signal line 209. High-frequency noise can be removed by filter processing even when the number of taps is set to be small. High-frequency noise is noise which is generated in units of a small number of pixels, such as one pixel or two pixels.
For example, the high-frequency noise removal unit 210 removes high-frequency noise using an ε filter with a small number of taps. The high-frequency noise removal unit 210 supplies an image with high-frequency noise removed to the reduction NR unit 220 through a signal line 241. Hereinafter, an image with high-frequency noise removed by the high-frequency noise removal unit 210 is referred to as a high-frequency noise-removed image.
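As a rough illustration of this kind of small-tap ε filter, the following Python sketch (the function name and parameter values are illustrative, not taken from the apparatus) averages only those neighbors whose difference from the center pixel is within a threshold ε, so that small-amplitude noise is smoothed while large steps (edges) are left intact:

```python
import numpy as np

def epsilon_filter_1d(signal, epsilon, taps=3):
    """Small-tap epsilon filter: neighbors that differ from the center
    pixel by more than epsilon are replaced by the center value before
    averaging, so edges are preserved while small noise is smoothed."""
    radius = taps // 2
    padded = np.pad(signal.astype(float), radius, mode="edge")
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        center = padded[i + radius]
        window = padded[i:i + taps]
        # Classic epsilon-filter form: clip large deviations to the center.
        clipped = np.where(np.abs(window - center) <= epsilon, window, center)
        out[i] = clipped.mean()
    return out
```

Applied to a step edge, the output reproduces the step unchanged, while small fluctuations in a flat region are averaged out.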
The reduction NR unit 220 removes low-frequency noise in an image supplied from the high-frequency noise removal unit 210 using a reduced image of the image. Low-frequency noise is patchy noise which appears in a plurality of adjacent pixels (wide range), and is unable to be removed by a filter with a small number of taps. Low-frequency noise is noise which is not removed by the high-frequency noise removal unit 210, and for example, appears when a dark subject is captured with high sensitivity.
The reduction NR unit 220 includes an image reduction unit 221, a low-frequency noise removal unit 222, an image enlargement unit 223, an addition determination unit 224, and an added image generation unit 225. The reduction NR unit 220 supplies an image with low-frequency noise removed and a reduced image to the edge restoration unit 230. The reduction NR unit 220 is an example of a noise-removed image generation unit described in the appended claims.
The image reduction unit 221 generates a reduced image by reducing the size of the image supplied through the signal line 241 1/N times. For example, the image reduction unit 221 generates a reduced image by reducing the supplied image to ¼ size. The reduction ratio (N) is set such that the frequency which acts as the criterion (boundary) for the band limitation (the frequency at and above which frequency components are cut) falls within the band of the major frequency components at a near edge. The image reduction unit 221 supplies the generated reduced image to the low-frequency noise removal unit 222.
The low-frequency noise removal unit 222 removes noise which is included in the reduced image supplied from the image reduction unit 221. Since high-frequency noise has already been removed by the high-frequency noise removal unit 210, the noise removal in the low-frequency noise removal unit 222 removes the low-frequency noise included in the image. As a noise removal method, various methods are considered; for example, the low-frequency noise removal unit 222 removes noise using an ε filter in the same manner as the high-frequency noise removal unit 210. Since the image subjected to the noise removal processing is a reduced image, the generation range (number of pixels) of low-frequency noise becomes smaller (¼) than before reduction. For this reason, low-frequency noise can be removed by filter processing of the reduced image with a filter with a small number of taps. The low-frequency noise removal unit 222 supplies the reduced image with low-frequency noise removed to the image enlargement unit 223.
The image enlargement unit 223 enlarges the reduced image supplied from the low-frequency noise removal unit 222 N times to convert it to an image of original size. For example, when the image is reduced to ¼ size in the image reduction unit 221, the image enlargement unit 223 enlarges the size of the reduced image four times. Hereinafter, an image which is enlarged by the image enlargement unit 223 after low-frequency noise is removed by the low-frequency noise removal unit 222 is referred to as a low-frequency noise-removed image. The image enlargement unit 223 supplies the generated low-frequency noise-removed image to the addition determination unit 224, the added image generation unit 225, and the edge restoration unit 230 through a signal line 242.
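The reduce-then-denoise-then-enlarge path of the reduction NR unit 220 can be sketched as follows. This is a minimal illustration assuming block-average reduction and pixel-repetition enlargement (an actual image reduction/enlargement unit would use proper interpolation filters), with an arbitrary small-tap filter passed in for the noise removal on the reduced image:

```python
import numpy as np

def reduce_1_over_n(img, n):
    """Reduce a 2-D image 1/n times by averaging n x n blocks
    (a simple band-limiting reduction)."""
    h, w = img.shape
    return img[:h - h % n, :w - w % n].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

def enlarge_n(img, n):
    """Enlarge n times by pixel repetition (stand-in for interpolation)."""
    return np.repeat(np.repeat(img, n, axis=0), n, axis=1)

def reduction_nr(img, n=4, denoise=lambda x: x):
    """Sketch of the reduction NR path: low-frequency noise spans fewer
    pixels in the reduced image, so the small-tap filter passed in as
    `denoise` can remove it there before the image is enlarged back."""
    small = reduce_1_over_n(img, n)
    return enlarge_n(denoise(small), n)
```

The output has the original size, and any noise pattern wider than one reduced-image pixel has been exposed to the small-tap filter.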
The addition determination unit 224 determines a blending ratio (addition ratio) of the high-frequency noise-removed image supplied from the high-frequency noise removal unit 210 through the signal line 241 and the low-frequency noise-removed image supplied from the image enlargement unit 223 through the signal line 242 for each pixel value (for each pixel). As a method which calculates the addition ratio, various methods are considered. For example, a method which determines the addition ratio for each pixel using the high-frequency noise-removed image or the low-frequency noise-removed image, a method which determines the addition ratio from external information (imaging conditions, such as imaging in a flesh color definition mode), or the like is considered. A method which determines the addition ratio for each pixel using the high-frequency noise-removed image or the low-frequency noise-removed image and modulates the value using external information, or the like is also considered. As an example, description will be provided assuming that the addition ratio is calculated for each pixel using the high-frequency noise-removed image and the low-frequency noise-removed image.
The addition determination unit 224 calculates the addition ratio S such that “0≦S≦1” is satisfied. For example, the addition determination unit 224 calculates the addition ratio S for each pixel using Expression (1).
S=|(PIN−PLOW)×f| (1)
PIN is a pixel value in the high-frequency noise-removed image. PLOW is a pixel value in the low-frequency noise-removed image. f is a conversion factor.
In the calculation of the addition ratio S using Expression (1), when the conversion factor f is set such that the calculation result of the right side becomes greater than "1.0", saturation processing is performed with 1.0. If the addition ratio S is calculated using Expression (1), the addition ratio S becomes a value close to "1" at an edge of an image, becomes a value close to "0" in a flat portion, and satisfies "0<S<1" at a near edge.
The addition determination unit 224 calculates the addition ratio for all pixel values constituting an image (high-frequency noise-removed image) of original size, and supplies the calculated addition ratio to the added image generation unit 225.
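Expression (1) with the saturation processing can be sketched as follows; the function name and the example value of the conversion factor f are illustrative assumptions:

```python
import numpy as np

def addition_ratio(p_in, p_low, f):
    """Addition ratio S per Expression (1): S = |(PIN - PLOW) * f|,
    saturated at 1.0 so that 0 <= S <= 1 holds for every pixel."""
    return np.clip(np.abs((p_in - p_low) * f), 0.0, 1.0)
```

With an illustrative f of 0.05, a pixel differing by 50 from the low-frequency noise-removed image (an edge) saturates to S = 1, a small difference (a near edge) gives 0 < S < 1, and an identical pixel (a flat portion) gives S = 0.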
The added image generation unit 225 adds the high-frequency noise-removed image and the low-frequency noise-removed image in accordance with the addition ratio, and generates an image (image after reduction NR) with noise removed. For example, the added image generation unit 225 calculates a pixel value (PNR) in the image after reduction NR for each pixel using Expression (2).
PNR=S×PIN+(1−S)×PLOW (2)
From Expression 2, when the addition ratio S is “1”, the pixel value in the high-frequency noise-removed image is output directly as the pixel value of the image after reduction NR. When the addition ratio S is “0”, the pixel value in the low-frequency noise-removed image is output directly as the pixel value of the image after reduction NR.
That is, from Expression 2, in regard to the pixel values at the edge at which the addition ratio S is a value close to "1", the ratio of the pixel values in the high-frequency noise-removed image increases. In regard to the pixel values in the flat portion in which the addition ratio S is a value close to "0", the ratio of the pixel values in the low-frequency noise-removed image increases. In a near-edge portion in which the addition ratio S satisfies "0<S<1", the pixel values in the high-frequency noise-removed image and the pixel values in the low-frequency noise-removed image are blended in accordance with the addition ratio S. In this way, the addition ratio S represents the level of edge; as the level becomes higher, the ratio resulting from the high-frequency noise-removed image increases.
The added image generation unit 225 supplies the image (image after reduction NR) generated by addition to the edge restoration unit 230 through a signal line 243.
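The blending of Expression (2) can likewise be sketched in a few lines (names are illustrative); S close to 1 (edge) keeps the high-frequency noise-removed pixel, and S close to 0 (flat portion) keeps the low-frequency noise-removed pixel:

```python
import numpy as np

def blend_reduction_nr(p_in, p_low, s):
    """Image after reduction NR per Expression (2):
    PNR = S*PIN + (1 - S)*PLOW, computed per pixel."""
    return s * p_in + (1.0 - s) * p_low
```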
The edge restoration unit 230 restores resolution at the edge and the near edge in the image after reduction NR. Since the image after reduction NR is generated by blending the high-frequency noise-removed image and the low-frequency noise-removed image, high-frequency noise and low-frequency noise are reduced. Meanwhile, as the ratio of the pixel value of the low-frequency noise-removed image is high, resolution (high-frequency component) is lowered. Accordingly, the edge restoration unit 230 restores resolution at the edge and the near edge by unsharp mask processing.
The edge restoration unit 230 includes a subtractor 231, a gain setting unit 232, a difference adjustment unit 233, and an adder 234. The edge restoration unit 230 is an example of a corrected image generation unit described in the appended claims.
The subtractor 231 performs subtraction with the image after reduction NR supplied from the added image generation unit 225 through the signal line 243 and the low-frequency noise-removed image supplied from the image enlargement unit 223 through the signal line 242, and calculates a difference value for unsharp mask processing for each pixel. The subtractor 231 supplies the calculated difference value to the difference adjustment unit 233 through a signal line 244.
The gain setting unit 232 determines a value (gain) which adjusts the difference value for each pixel. As a method which calculates the gain, various methods are considered, and for example, a method which determines the gain for each pixel using the image after reduction NR or the low-frequency noise-removed image, a method which determines the gain from external information, such as lens characteristics, or the like is considered. A method which determines the gain for each pixel using the image after reduction NR or the low-frequency noise-removed image and modulates the gain using external information, or the like is considered.
As an example, it is assumed that the gain is determined on the basis of the sign (positive/negative) and the magnitude of the value of the difference between the image after reduction NR and the low-frequency noise-removed image. If the gain is determined in this way, for example, adjustment can be performed such that the level of enhancement by unsharp mask processing decreases in a pixel value in which the difference is positive, and the level of enhancement by unsharp mask processing increases in a pixel value in which the difference is negative.
The difference adjustment unit 233 adjusts the difference value supplied from the subtractor 231 through the signal line 244 on the basis of the gain supplied from the gain setting unit 232. For example, the difference adjustment unit 233 calculates a difference value E subjected to gain adjustment for each pixel value using Expression (3).
E=D×G (3)
D is a difference value and is a value of the calculation result of PNR−PLOW by the subtractor 231. G is a gain set by the gain setting unit 232.
The difference adjustment unit 233 performs gain adjustment on the difference value for each pixel using Expression 3, and supplies the difference value subjected to gain adjustment to the adder 234.
The adder 234 generates an image with an edge restored on the basis of the image after reduction NR supplied from the added image generation unit 225 through the signal line 243 and the difference value after gain adjustment supplied from the difference adjustment unit 233. For example, the adder 234 calculates a pixel value Pout using Expression (4) and generates an image (NR image) with an edge restored.
Pout=PNR+E (4)
In this way, the difference value subjected to gain adjustment is added to the pixel values of the image after reduction NR, whereby unsharp mask processing is performed and resolution at the edge and the near edge is restored. The adder 234 outputs an image (NR image) having the added pixel values from the NR unit 200 through the signal line 201.
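The whole edge restoration path (subtractor 231, gain setting unit 232, difference adjustment unit 233, and adder 234) can be sketched as follows. The sign-dependent gain values are illustrative assumptions, chosen only to show weaker enhancement where the difference is positive and stronger enhancement where it is negative, as in the example above:

```python
import numpy as np

def restore_edges(p_nr, p_low, gain_pos=0.3, gain_neg=0.8):
    """Unsharp-mask edge restoration per Expressions (3) and (4)."""
    d = p_nr - p_low                          # difference image (subtractor 231)
    g = np.where(d >= 0, gain_pos, gain_neg)  # sign-dependent gain (gain setting unit 232)
    e = d * g                                 # E = D x G (difference adjustment unit 233)
    return p_nr + e                           # Pout = PNR + E (adder 234)
```

For a pixel 10 above the low-frequency noise-removed value the enhancement is mild (+3 with these gains), while a pixel 10 below is pushed further down (−8), sharpening the dark side of the edge more strongly.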
Next, an edge, a near edge, and a flat portion in an image will be described referring to the drawings.
In the image 310, a black line is drawn in an image of a white background; the white background corresponds to a flat portion (flat portion 311), the black line corresponds to an edge (edge 313), and a region with minute dots at the boundary of the white background and the black line corresponds to a near edge (near edge 312). As shown in the distribution waveform 314, in the flat portion 311, there is little difference in intensity of the pixel value from a surrounding pixel. At the edge 313, there is a large difference in the intensity of the pixel value from the pixel of the flat portion 311, and at the near edge 312, the pixel value transitions so as to bridge the difference in the pixel value between the edge 313 and the flat portion 311.
The photograph 320 is a photograph in which a mark representing an edge or a near edge is not added at the boundary between the building and the sky, and the photograph 321 is a photograph in which such marks are added. At the boundary between the building and the sky, the boundary itself corresponds to the edge, the region near the boundary corresponds to the near edge, and the region of the sky corresponds to the flat portion (the flat portion 331 of the photograph 321). In the photograph 321, an edge is represented by a black solid line (edge 333), and a near edge is represented by a dotted-line region (near edge 332).
In this way, the captured image includes the edge, the near edge, and the flat portion. The edge and the near edge include high-frequency components, and when removing low-frequency noise using a reduced image, if the image is replaced with a reduced image, the high-frequency components are removed and the image is blurred. For this reason, the reproduction of the high-frequency components at the edge and the near edge is important.
Next, reduction NR processing and unsharp mask processing by the NR unit 200 will be described referring to the drawings.
[Graphs 411 to 417 show the characteristics of the images at the respective stages of the reduction NR processing and the unsharp mask processing.]
Next, image processing (reduction NR processing and unsharp mask processing) in the NR unit 200 will be described referring to the drawings.
If noise removal is performed using the reduced image, noise in a frequency component (section W11) lower than 1/Nfs is removed. After noise is removed, even if the image is returned to original size by the image enlargement unit 223, a frequency component (section W12) higher than 1/Nfs remains cut. Accordingly, the frequency components of the low-frequency noise-removed image are constituted only by frequency components (section W11) lower than 1/Nfs, and there are no frequency components (section W12) higher than 1/Nfs.
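The band-limiting effect of the 1/N reduction can be illustrated with a toy one-dimensional example, assuming block-average reduction (N = 4): a component whose period equals the reduction block cancels exactly under averaging, while a slower component survives the reduce-then-enlarge round trip:

```python
import numpy as np

N = 4
n = np.arange(64)
low = np.sin(2 * np.pi * n / 64)        # component below fs/N: survives the reduction
high = np.sin(2 * np.pi * 16 * n / 64)  # component at fs/4: removed by the band limitation
signal = low + high

# 1/N reduction by block averaging (the band limitation), then
# N-times enlargement by pixel repetition.
reduced = signal.reshape(-1, N).mean(axis=1)
restored = np.repeat(reduced, N)

# Each block of the high-frequency component averages to (numerically)
# zero, so the reduced signal carries only the low-frequency component.
assert np.allclose(high.reshape(-1, N).mean(axis=1), 0.0)
assert np.allclose(reduced, low.reshape(-1, N).mean(axis=1))
```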
If the two images are added (blended) in accordance with the addition ratio S, a frequency component (a section W21 of
In regard to frequency components (a section W32 of
Next, the relationship between three regions (flat portion, near edge, and edge) of an image and image processing will be described referring to
In
As shown in
Next, the near edge will be described. As shown in
In regard to the frequency components higher than 1/Nfs at the near edge of the difference image, since there are no frequency components higher than 1/Nfs in the low-frequency noise-removed image, the components from the high-frequency noise-removed image remain in the difference image. When generating the image after reduction NR, since blending is performed using the addition ratio, the addition ratio (level of edge) is reflected in the pixel values of the difference image corresponding to the remaining components.
Next, the edge will be described. As shown in
In this way, band limitation (reduction ratio) when generating the low-frequency noise-removed image matches band limitation (reduction ratio) when generating the difference image (1/Nfs in
In this way, the unsharp mask processing is performed, whereby appropriate enhancement (contour enhancement) is performed only at the near edge and the edge. That is, resolution at the near edge which is lowered by the reduction NR processing can be restored.
The graph shown in
In this way, the level of edge determination during the reduction NR processing can be made equal to the level of edge determination during the unsharp mask processing, so that the near edge and the edge can be appropriately enhanced.
As shown in
As shown in
As shown in
Next, the operation of the NR unit 200 according to the first embodiment of the present technology will be described referring to the drawings.
First, it is determined whether or not to start image processing (Step S901), and when it is determined not to start the image processing, the process waits until it is determined to start the image processing.
When it is determined to start image processing (Step S901), an image (high-frequency noise-removed image) with high-frequency noise removed is generated by the high-frequency noise removal unit 210 (Step S902). For example, when image data to be processed is supplied, it is determined to start image processing, and the high-frequency noise-removed image is generated by the high-frequency noise removal unit 210.
Next, an image (reduced image) which is obtained by reducing (×1/N) the high-frequency noise-removed image is generated by the image reduction unit 221 (Step S903). Thereafter, low-frequency noise in the reduced image is removed by the low-frequency noise removal unit 222 (Step S904). Subsequently, an image (low-frequency noise-removed image) which is obtained by enlarging (×N) the reduced image with low-frequency noise removed is generated by the image enlargement unit 223 (Step S905). Step S904 is an example of generating a noise-removed image described in the appended claims.
The addition ratio is calculated by the addition determination unit 224 (Step S906). Thereafter, an image (image after reduction NR) which is obtained by blending the high-frequency noise-removed image and the low-frequency noise-removed image on the basis of the addition ratio is generated by the added image generation unit 225 (Step S907).
Subsequently, the difference (difference image) between the low-frequency noise-removed image and the image after reduction NR is calculated by the subtractor 231 (Step S908). Thereafter, a value (gain) which adjusts the difference value for addition during the unsharp mask processing is set by the gain setting unit 232 (Step S909). Subsequently, the difference value is adjusted on the basis of the set gain by the difference adjustment unit 233 (Step S910). An image (output image) which is obtained by adding the adjusted difference value and the image after reduction NR is generated by the adder 234 (Step S911), and the processing procedure of the image processing by the NR unit 200 ends. Steps S908 to S911 are an example of generating a corrected image described in the appended claims.
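The processing procedure of Steps S902 to S911 can be sketched as follows. This is a simplified one-dimensional sketch, not the actual implementation of the NR unit 200: the box filters, the gradient-based estimate of the addition ratio, and the gain value are all illustrative assumptions.

```python
import numpy as np

def box_filter(x, taps):
    """Moving-average filter used here as a stand-in noise filter."""
    kernel = np.ones(taps) / taps
    return np.convolve(x, kernel, mode="same")

def reduction_nr(input_image, n=4, gain=1.0):
    # Step S902: remove high-frequency noise (filter with few taps).
    hf_removed = box_filter(input_image, 3)
    # Step S903: reduce (x1/N) by box-filter averaging.
    trimmed = hf_removed[: len(hf_removed) // n * n]
    reduced = trimmed.reshape(-1, n).mean(axis=1)
    # Step S904: remove low-frequency noise in the reduced image.
    reduced_nr = box_filter(reduced, 3)
    # Step S905: enlarge (xN) -> low-frequency noise-removed image.
    lf_removed = np.repeat(reduced_nr, n)
    # Step S906: addition ratio per pixel (assumed here to be larger
    # near edges, estimated from the local gradient magnitude).
    hf = hf_removed[: len(lf_removed)]
    grad = np.abs(np.gradient(hf))
    ratio = np.clip(grad / (grad.max() + 1e-12), 0.0, 1.0)
    # Step S907: blend -> image after reduction NR.
    after_nr = ratio * hf + (1.0 - ratio) * lf_removed
    # Steps S908-S911: unsharp mask with the same band limitation.
    diff = after_nr - lf_removed        # S908: difference image
    return after_nr + gain * diff       # S909-S911: gain, adjust, add

# A step edge on a smooth background as a toy input.
img = np.sin(np.linspace(0, 4 * np.pi, 256)) \
    + np.where(np.arange(256) > 128, 1.0, 0.0)
out = reduction_nr(img)
print(out.shape)
```

Because the difference image is nearly zero on flat portions (where the addition ratio is small), the final addition enhances only the edge and the near edge.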
In this way, according to the first embodiment of the present technology, since the reduced images which are used in the reduction NR processing and the unsharp mask processing have the same reduction ratio, it is possible to remove low-frequency noise and to appropriately enhance the edge and the near edge. That is, according to the first embodiment of the present technology, it is possible to improve image quality in an image subjected to noise removal processing.
In the first embodiment of the present technology, an example has been described where the reduced images which are used in the reduction NR processing and the unsharp mask processing have the same reduction ratio, and the two kinds of processing have the same level of edge determination. This makes it possible to enhance the edge and the near edge in the unsharp mask processing.
Depending on the image quality of the captured image, it may be desirable to enhance the contrast of the entire image in the unsharp mask processing. However, with the method according to the first embodiment of the present technology, it is not possible to enhance the contrast of the entire image.
Accordingly, in a second embodiment of the present technology, an example where the contrast of the entire image is enhanced and low-frequency noise is removed during the reduction NR processing will be described referring to
The NR unit 600 is a modification of the NR unit 200 shown in
The NR unit 600 is different from the NR unit 200 of
In the NR unit 600, an edge restoration unit 630 which performs the unsharp mask processing includes an image enlargement unit 236 which enlarges the reduced image supplied from the image reduction unit 221, in addition to the respective parts of the edge restoration unit 230 of
The image reduction unit 221 shown in
As shown in
Next, the operation of the NR unit 600 according to the second embodiment of the present technology will be described referring to the drawings.
First, it is determined whether or not to start image processing (Step S931), and when it is determined not to start the image processing, the process waits until it is determined to start the image processing.
When it is determined to start image processing (Step S931), an image (high-frequency noise-removed image) with high-frequency noise removed is generated by the high-frequency noise removal unit 210 (Step S932).
Next, an image (reduced image) which is obtained by reducing (×1/N) the high-frequency noise-removed image is generated by the image reduction unit 221 (Step S933). Subsequently, an image (enlarged image) which is obtained by enlarging (×N) the reduced image is generated by the image enlargement unit 236 (Step S934). The difference (difference image) between the high-frequency noise-removed image and the enlarged image is calculated by the subtractor 231 (Step S935).
Thereafter, a value (gain) which adjusts the difference value for addition in the unsharp mask processing is set by the gain setting unit 232 (Step S936). Subsequently, the difference value is adjusted on the basis of the set gain by the difference adjustment unit 233 (Step S937). An image (contrast-enhanced image) which is obtained by adding the adjusted difference value and the image after reduction NR is generated by the adder 234 (Step S938).
Subsequently, low-frequency noise in the reduced image is removed by the low-frequency noise removal unit 222 (Step S939). An image (low-frequency noise-removed image) which is obtained by enlarging (×N) the reduced image with low-frequency noise removed is generated by the image enlargement unit 223 (Step S940).
The addition ratio is calculated by the addition determination unit 224 (Step S941). Thereafter, an image (output image) which is obtained by blending the contrast-enhanced image and the low-frequency noise-removed image on the basis of the addition ratio is generated by the added image generation unit 225 (Step S942), and the processing procedure of the image processing by the NR unit 600 ends.
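The processing procedure of Steps S932 to S942 can likewise be sketched as follows. As before, this is a simplified one-dimensional sketch, not the actual implementation of the NR unit 600: the box filters, the gradient-based addition ratio, and the gain value are illustrative assumptions. The key difference from the first embodiment is that the unsharp mask processing is applied to the whole image before blending, so contrast enhancement is not limited to the edge and the near edge.

```python
import numpy as np

def box_filter(x, taps):
    kernel = np.ones(taps) / taps
    return np.convolve(x, kernel, mode="same")

def reduce_1n(x, n):
    trimmed = x[: len(x) // n * n]
    return trimmed.reshape(-1, n).mean(axis=1)

def contrast_nr(input_image, n=4, gain=0.5):
    # Step S932: remove high-frequency noise.
    hf_removed = box_filter(input_image, 3)
    # Step S933: reduce (x1/N).
    reduced = reduce_1n(hf_removed, n)
    # Step S934: enlarge (xN) the reduced image (image enlargement unit 236).
    enlarged = np.repeat(reduced, n)
    hf = hf_removed[: len(enlarged)]
    # Step S935: difference image over the entire image, not only at edges.
    diff = hf - enlarged
    # Steps S936-S938: gain, adjust, add -> contrast-enhanced image.
    enhanced = hf + gain * diff
    # Steps S939-S940: low-frequency NR on the reduced image, then enlarge.
    lf_removed = np.repeat(box_filter(reduced, 3), n)
    # Step S941: addition ratio (assumed gradient-based, as before).
    grad = np.abs(np.gradient(hf))
    ratio = np.clip(grad / (grad.max() + 1e-12), 0.0, 1.0)
    # Step S942: blend the contrast-enhanced image and the
    # low-frequency noise-removed image.
    return ratio * enhanced + (1.0 - ratio) * lf_removed

img = np.cos(np.linspace(0, 2 * np.pi, 128))
out = contrast_nr(img)
print(out.shape)
```

Since the difference is taken before blending, the gain term acts on the whole image, while the subsequent blend still removes low-frequency noise on flat portions.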
In this way, according to the second embodiment of the present technology, it is possible to enhance the contrast of the entire image in the unsharp mask processing and to remove low-frequency noise. That is, according to the second embodiment of the present technology, it is possible to improve image quality in an image subjected to noise removal processing.
Although in
As shown in
As described in the first and second embodiments of the present technology, if band limitation in the reduction NR processing and the unsharp mask processing is the same, it is possible to enhance only the edge and the near edge. As a method which makes the band limitation the same, a method other than those described in the first and second embodiments of the present technology may be considered.
Accordingly, in
The NR unit 700 is a modification of the NR unit 200 shown in
The edge restoration unit 730 includes an image reduction unit 731 which reduces the image after reduction NR 1/N times, and an image enlargement unit 732 which enlarges the reduced image after reduction NR N times, in addition to the configuration of the edge restoration unit 230 of the
As shown in
The NR unit 750 is a modification of the NR unit 200 shown in
In the NR unit 750, since the reduced image with the same reduction ratio is used to perform the unsharp mask processing after the reduction NR processing, as in the first embodiment of the present technology, it is possible to appropriately enhance the edge and the near edge.
In addition to the modifications shown in
Although in the embodiments of the present technology, an example where processing is performed on an image subjected to YC conversion has been described, the present technology is not limited thereto, and an RGB image may be used directly and NR processing may be performed on the basis of an RGB signal. Although an example where correction processing is performed on the luminance component (Y) after YC conversion has been described, the present technology is not limited thereto, and NR processing may be performed on the basis of the color difference signal (Cr, Cb).
As described above, according to the embodiments of the present technology, the reduced images which are used in the reduction NR processing and the unsharp mask processing have the same reduction ratio, whereby it is possible to improve image quality in an image subjected to noise removal processing.
The foregoing embodiments are examples for implementing the present technology, and the items of the embodiments and the inventive subject matters of the appended claims have a correspondence relationship. Similarly, the inventive subject matters of the appended claims and the items of the embodiments of the present technology to which the same names are given have a correspondence relationship. However, the present technology is not limited to the embodiments, and the embodiments may be modified in various forms within the scope without departing from the gist of the present technology.
The processing procedure described in the foregoing embodiments may be understood as a method having a series of procedures, as a program which causes a computer to execute the series of procedures, or as a recording medium which stores the program. As the recording medium, for example, a hard disk, a CD (Compact Disc), an MD (Mini Disc), a DVD (Digital Versatile Disc), a memory card, a Blu-ray Disc (Registered Trademark), or the like may be used.
The present technology may be configured as follows.
(1) An image processing apparatus including:
a noise-removed image generation unit which, on the basis of an input image and a reduced image obtained by reducing the input image at predetermined magnification, generates a noise-removed image with noise in the input image removed; and
a corrected image generation unit which generates, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generates an edge-corrected image on the basis of the noise-removed image and the high-frequency component image.
(2) The image processing apparatus described in (1),
wherein the corrected image generation unit generates the high-frequency component image by subtraction processing for each pixel between a low-frequency component image primarily having a frequency component not to be removed by the band limitation and the noise-removed image.
(3) The image processing apparatus described in (2),
wherein the noise-removed image generation unit generates a second noise-removed image by enlarging an image with noise in the reduced image removed at the predetermined magnification and then generates the noise-removed image by addition processing for each pixel between the second noise-removed image and the input image in accordance with an addition ratio set for each pixel, and
the corrected image generation unit generates the high-frequency component image using the second noise-removed image as the low-frequency component image.
(4) The image processing apparatus described in (2),
wherein the corrected image generation unit generates the high-frequency component image using an image obtained by reducing and then enlarging the noise-removed image at the predetermined magnification as the low-frequency component image.
(5) The image processing apparatus described in (2),
wherein the corrected image generation unit generates the high-frequency component image using an image obtained by reducing and enlarging the reduced image at the predetermined magnification as the low-frequency component image.
(6) The image processing apparatus described in (1),
wherein the corrected image generation unit generates the edge-corrected image by unsharp mask processing on the basis of the noise-removed image and the high-frequency component image.
(7) An image processing apparatus including:
a reduced image generation unit which generates a reduced image by reducing an input image at predetermined magnification;
a noise-removed image generation unit which generates a noise-removed image with noise in the input image removed on the basis of the input image and the reduced image when edge enhancement is performed on the input image; and
a corrected image generation unit which generates a high-frequency component image on the basis of the generated reduced image and the noise-removed image when the edge enhancement is performed and generates an edge-corrected image by unsharp mask processing on the basis of the noise-removed image and the high-frequency component image.
(8) The image processing apparatus described in (7),
wherein the corrected image generation unit generates a second high-frequency component image on the basis of the reduced image and the input image when contrast enhancement is performed on the input image and generates a contrast-enhanced image by the unsharp mask processing on the basis of the input image and the second high-frequency component image, and
the noise-removed image generation unit generates an image with noise in the contrast-enhanced image removed on the basis of the reduced image and the contrast-enhanced image when the contrast enhancement is performed.
(9) An imaging apparatus including:
a lens unit which condenses subject light;
an imaging device which converts subject light to an electrical signal;
a signal processing unit which converts the electrical signal output from the imaging device to a predetermined input image;
a noise-removed image generation unit which, on the basis of the input image and a reduced image obtained by reducing the input image at predetermined magnification, generates a noise-removed image with noise in the input image removed;
a corrected image generation unit which generates, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generates an edge-corrected image on the basis of the noise-removed image and the high-frequency component image; and
a recording processing unit which compresses and encodes the generated edge-corrected image to generate and record recording data.
(10) An image processing method including:
on the basis of an input image and a reduced image obtained by reducing the input image at predetermined magnification, generating a noise-removed image with noise in the input image removed; and
generating, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generating an edge-corrected image on the basis of the noise-removed image and the high-frequency component image.
(11) A program which causes a computer to execute:
on the basis of an input image and a reduced image obtained by reducing the input image at predetermined magnification, generating a noise-removed image with noise in the input image removed,
generating, from the noise-removed image, a high-frequency component image primarily having a frequency component of the noise-removed image in the same band as a frequency component to be removed by band limitation in the reduction at the predetermined magnification and generating an edge-corrected image on the basis of the noise-removed image and the high-frequency component image.
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2012-138511 filed in the Japan Patent Office on Jun. 20, 2012, the entire contents of which are hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
2012-138511 | Jun 2012 | JP | national