Video Auto Enhancing Algorithm

Information

  • Publication Number
    20090091661
  • Date Filed
    December 15, 2008
  • Date Published
    April 09, 2009
Abstract
A method for enhancing a video includes applying a linear histogram stretch, a gamma correction, and sigma row, column, and temporal filters to the video.
Description
FIELD OF INVENTION

This invention relates to methods for enhancing video.


DESCRIPTION OF RELATED ART

Digital video cameras commonly employ CMOS or CCD image sensors to capture digital video. In poor lighting, the digital video is often dark and typically includes video noise that appears as graininess, fuzz, snow, or dancing specks. This is because in the limited time allotted to each frame, the image sensors may not have sufficient time for a proper exposure to capture the frame. Thus, what is needed is a method that enhances the video to improve brightness and reduce video noise.


SUMMARY

In one embodiment of the invention, a method for enhancing a video includes applying a gamma correction to a current frame with an optimal gamma that maximizes the entropy of the current frame. The optimal gamma parameter is determined by searching in a range based on the change in the mean value of brightness between a preceding frame and the current frame.


In one embodiment of the invention, the method further includes determining a difference in histogram properties between a preceding frame and a current frame, and applying a linear histogram stretch to the current frame with parameters determined for the current frame if the difference is less than a first threshold.


In one embodiment of the invention, the method further includes applying a sigma row filter to a pixel in a current frame based on the brightness of an adjacent pixel in the same row in the current frame, applying a sigma column filter to the pixel based on the brightness of an adjacent pixel in the same column in the current frame, and applying a sigma temporal filter to the pixel based on the brightness of an adjacent pixel at the same row and column in a preceding frame.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a flowchart of a method to enhance brightness and reduce video noise in a video in one embodiment of the invention.



FIG. 2 is a flowchart of a linear transformation method to enhance brightness in a video in one embodiment of the invention.



FIG. 3 is a flowchart of a nonlinear transformation method to enhance brightness in a video in one embodiment of the invention.



FIG. 4 is a flowchart of a filtering method to reduce video noise in a video in one embodiment of the invention.





Use of the same reference numbers in different figures indicates similar or identical elements.


DETAILED DESCRIPTION


FIG. 1 is a flowchart of a method 100 for enhancing a video in one embodiment of the invention. Method 100 is typically implemented with software on a general purpose computer and operates on a YCrCb color space.


In step 102, brightness enhancement is applied to the current frame. In one embodiment, the brightness enhancement includes a linear histogram stretch as described later in reference to FIG. 2, and a gamma correction as described later in reference to FIG. 3. After brightness enhancement, the current frame is passed to the next process.


In step 104, video noise reduction is applied to the current frame to suppress video noise that is enhanced along with the brightness in step 102. In one embodiment, the video noise reduction includes sigma filtering as described later in reference to FIG. 4.


In step 106, it is determined if the last frame of the video has been reached. If not, step 106 is followed by step 108. If the last frame has been reached, method 100 ends.


In step 108, the next frame in the video is selected for processing. Step 108 is followed by step 102 and method 100 repeats until the entire video has been processed.
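
For orientation only, the frame loop of method 100 might be sketched in Python as follows; enhance_brightness and reduce_noise are hypothetical placeholders standing in for the processes of FIGS. 2-4, and frames is assumed to be an iterable of YCrCb frames stored as NumPy arrays.

```python
def enhance_brightness(frame, state):
    # Placeholder for the linear histogram stretch and gamma correction of FIGS. 2-3.
    return frame, state

def reduce_noise(frame):
    # Placeholder for the sigma filtering of FIG. 4.
    return frame

def enhance_video(frames):
    """Sketch of method 100: enhance, then denoise, each frame in turn."""
    state = None
    for frame in frames:                                 # step 108: select the next frame
        frame, state = enhance_brightness(frame, state)  # step 102
        frame = reduce_noise(frame)                      # step 104
        yield frame                                      # after the last frame, method 100 ends (step 106)
```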



FIG. 2 is a flowchart of a linear histogram stretch method 200 for enhancing the brightness of a frame in the video in one embodiment of the invention.


In step 202, the parameters of the linear histogram stretch to be applied to the current frame are determined. The parameters of the linear histogram stretch are determined as follows:










a = 255/(max − min),  (1)

b = −a·min,  (2)







where a and b are the parameters for the linear histogram stretch, max is a maximum brightness above which pixels in a frame are set to a maximum brightness (e.g., 255), and min is a minimum brightness below which pixels in the frame are set to a minimum brightness (e.g., 0). In one embodiment, max is set so that 1% of the total number of pixels in the frame will be set to 255, and min is set so that 1% of the total number of pixels in the frame will be set to 0.


The linear histogram stretch is defined as follows:






Y=a·Y0+b,  (3)


where Y is the new brightness (i.e., luminance) of a pixel in a frame, and Y0 is an original brightness of the pixel. The linear histogram stretch is applied to each pixel in the frame.
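
As an illustrative sketch only (not the claimed implementation), the parameters of equations (1)-(2) and the stretch of equation (3) could be computed as below; the luminance channel is assumed to be an 8-bit NumPy array, and the 1% clipping points are read off the cumulative histogram.

```python
import numpy as np

def stretch_parameters(y, clip=0.01):
    """Compute a and b per equations (1)-(2), clipping 1% of pixels at each end."""
    hist = np.bincount(y.ravel(), minlength=256)
    cdf = np.cumsum(hist) / y.size
    lo = int(np.searchsorted(cdf, clip))         # min: ~1% of pixels fall at or below this level
    hi = int(np.searchsorted(cdf, 1.0 - clip))   # max: ~1% of pixels fall above this level
    hi = max(hi, lo + 1)                         # guard against a degenerate histogram
    a = 255.0 / (hi - lo)                        # equation (1)
    b = -a * lo                                  # equation (2)
    return a, b

def stretch_luminance(y, a, b):
    """Apply Y = a*Y0 + b (equation (3)) and clamp the result to [0, 255]."""
    return np.clip(a * y.astype(np.float32) + b, 0, 255).astype(np.uint8)
```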


Conventionally, parameters a and b are calculated for each frame. This may cause the brightness of the frames to vary noticeably to the viewers (e.g., appearing as sharp flashes in the video). Thus, the following steps are implemented to subtly enhance the brightness of the frames.


In step 203, it is determined if the current frame is the first frame in the video. If so, step 203 is followed by step 210. Otherwise step 203 is followed by step 204.


In step 204, it is determined if the difference in histogram properties between the current frame and the preceding frame in the video is less than a first threshold. If the difference in histogram properties is less than the first threshold, then the brightness enhancement with parameters a and b determined for the current frame will preserve the continuity in brightness of the video. Otherwise parameters a and b determined for the preceding frame will be used.


In one embodiment, the difference in histogram properties is determined as follows:





ΔP=|P1−P0|/P0,  (4)


where ΔP is the difference in the histogram properties, P1 is the histogram property of the original current frame (i.e., without any enhancement), and P0 is the histogram property of the original preceding frame (i.e., without any enhancement). The histogram property is defined as:










P = Σ_{j=0}^{255} pj·|j − mean|,  (5)







where P is the histogram property of a frame, pj is the total number of pixels with a brightness of j in a frame, and mean is the mean value of the brightness of the frame. The histogram property is the degree of histogram concentricity of a frame.
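
For illustration, the histogram property of equation (5) and the frame-to-frame difference of equation (4) might be computed as in this sketch, again assuming 8-bit luminance arrays.

```python
import numpy as np

def histogram_property(y):
    """P = sum over j of p_j * |j - mean| (equation (5))."""
    hist = np.bincount(y.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256)
    return float(np.sum(hist * np.abs(levels - y.mean())))

def histogram_difference(y_current, y_preceding):
    """Delta P = |P1 - P0| / P0 (equation (4))."""
    p1 = histogram_property(y_current)
    p0 = histogram_property(y_preceding)
    return abs(p1 - p0) / p0
```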


If ΔP is less than the first threshold, then step 204 is followed by step 210. Otherwise step 204 is followed by step 206. In one embodiment, the first threshold is 0.02.


In step 206, it is determined if ΔP is greater than or equal to the first threshold and less than a second threshold. If so, step 206 is followed by step 208. Otherwise step 206 is followed by step 212. In one embodiment, the second threshold is 0.2.


In step 208, it is determined if the difference in parameter a determined for the current frame and parameter a determined for the preceding frame is less than a third threshold. If so, then step 208 is followed by step 210. Otherwise step 208 is followed by step 212. In one embodiment, the third threshold is 0.15.


If the current frame satisfies the conditions of steps 206 and 208, then the brightness enhancement with parameters a and b determined for the current frame will preserve the continuity in brightness of the video. Otherwise parameters a and b determined for the preceding frame will be used.


In step 210, linear histogram stretch is applied to the pixels in the current frame with parameters a and b determined for the current frame. Unlike conventional linear histogram stretch, the other components of the color space are also stretched along with the brightness as follows:






Y=a·Y0+b,  (3)

Cr=a·Cr0,  (6)

Cb=a·Cb0,  (7)


where Cr is the new red chrominance, Cr0 is the original red chrominance, Cb is the new blue chrominance, and Cb0 is the original blue chrominance. By stretching the other color components along with brightness, the color of the frame will not appear faded. Step 210 is followed by step 214, which ends method 200 for the current frame.


In step 212, linear histogram stretch is applied to the pixels in the current frame with parameters a and b determined for the preceding frame. Like step 210, the other components of the color space are also stretched along with the brightness. Step 212 is followed by step 214, which ends method 200 for the current frame.
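
Tying the pieces together, a hedged sketch of the per-frame decision logic of steps 203-212 might read as follows; it assumes the stretch_parameters and histogram_difference helpers from the earlier sketches are in scope, that frames are H x W x 3 YCrCb arrays, and that the 0.02, 0.2, and 0.15 thresholds are the example values given above.

```python
import numpy as np

def apply_stretch(frame, a, b):
    """Stretch Y per equation (3) and Cr, Cb per equations (6)-(7)."""
    out = frame.astype(np.float32)
    out[..., 0] = a * out[..., 0] + b     # Y
    out[..., 1] = a * out[..., 1]         # Cr
    out[..., 2] = a * out[..., 2]         # Cb
    return np.clip(out, 0, 255).astype(np.uint8)

def method_200(frame, prev_luma, prev_params, t1=0.02, t2=0.2, t3=0.15):
    """Apply the stretch with current-frame or preceding-frame parameters."""
    a, b = stretch_parameters(frame[..., 0])               # step 202
    if prev_luma is None:                                  # step 203: first frame
        return apply_stretch(frame, a, b), (a, b)          # step 210
    dP = histogram_difference(frame[..., 0], prev_luma)    # equation (4)
    use_current = dP < t1 or (dP < t2 and abs(a - prev_params[0]) < t3)  # steps 204-208
    if use_current:
        return apply_stretch(frame, a, b), (a, b)          # step 210
    return apply_stretch(frame, *prev_params), prev_params  # step 212
```

The returned parameter pair would be carried forward so the next frame can be compared against it, mirroring the parameter reuse of step 212.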



FIG. 3 is a flowchart of a gamma correction method 300 for enhancing the brightness of a frame in the video in one embodiment of the invention.


In step 302, it is determined if the current frame is the first frame in the video. If so, step 302 is followed by step 304. Otherwise step 302 is followed by step 306.


In step 304, an optimal gamma that maximizes entropy of the current frame is determined. By maximizing the entropy of the current frame, the distribution of pixel gray levels will be optimized and the details hidden in shadows will be enhanced. In one embodiment, the entropy of an enhanced image can be defined as follows:










E = Σ_{j=1}^{255} Pj·γ·((j+1)/256)^(γ−1),  (8)







where E is the entropy of the enhanced frame, j is a brightness level, Pj is a percentage of pixels in the original frame having brightness j, and γ is a variable gamma. In one embodiment, variable gamma γ is varied in a range from 0.65 to 1.2 to search for the optimal gamma that maximizes the entropy of the current frame. Step 304 is followed by step 312.
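
A minimal sketch of the entropy measure of equation (8) and the exhaustive search over the 0.65 to 1.2 range might look like this; the 0.005 search step is an assumption, not a value from the text.

```python
import numpy as np

def gamma_entropy(y, gamma):
    """E(gamma) per equation (8) for an 8-bit luminance frame."""
    hist = np.bincount(y.ravel(), minlength=256).astype(np.float64)
    p = hist / y.size                                  # P_j: fraction of pixels at level j
    j = np.arange(1, 256)                              # the sum runs from j = 1 to 255
    return float(np.sum(p[1:] * gamma * ((j + 1) / 256.0) ** (gamma - 1.0)))

def optimal_gamma(y, lo=0.65, hi=1.2, step=0.005):
    """Search [lo, hi] for the gamma that maximizes E (step 304)."""
    candidates = np.arange(lo, hi + step, step)
    return float(max(candidates, key=lambda g: gamma_entropy(y, g)))
```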


Conventionally, the optimal gamma is calculated for each frame. This may cause the brightness of the frames to vary noticeably to the viewers (e.g., appearing as sharp flashes in the video). Thus, the following steps are implemented to subtly enhance the brightness of the frames.


In step 306, the difference in the mean value of the brightness between the current frame and the preceding frame is determined. The difference is determined as follows:





Δm=|m1−m0|,  (9)


where Δm is the difference in the mean value of the brightness between the two frames, m1 is the mean value of the brightness in the current frame, and m0 is the mean value of the brightness in the preceding frame. Step 306 is followed by step 308.


In step 308, a reduced range for variable gamma γ is determined as follows:





[γ0−Δm·0.02, γ0+Δm·0.02],  (10)


where γ0 is the optimal gamma determined for the preceding frame. Step 308 is followed by step 310.


In step 310, an optimal gamma within the reduced range that maximizes the entropy of the current frame is determined. Step 310 is followed by step 312.


In step 312, gamma correction is applied to the current frame with the optimal gamma.


In one embodiment, gamma correction is applied as follows:










Y = 255·(Y0/255)^γ1,  (11)







where Y is the new brightness of a pixel in a frame, Y0 is an original brightness of the pixel, and γ1 is the optimal gamma for the current frame. Step 312 is followed by step 314, which ends method 300.
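
Continuing the sketch, the reduced search range of equation (10) and the correction of equation (11) could be applied as follows; this reuses the hypothetical optimal_gamma helper above, and clamping the reduced range to [0.65, 1.2] is an added assumption.

```python
import numpy as np

def gamma_for_frame(y, prev_gamma=None, prev_mean=None):
    """Pick the optimal gamma for the current frame (steps 304-310)."""
    if prev_gamma is None:                         # first frame: search the full range (step 304)
        return optimal_gamma(y)
    dm = abs(float(y.mean()) - prev_mean)          # equation (9)
    lo = max(0.65, prev_gamma - dm * 0.02)         # equation (10), clamped (assumption)
    hi = min(1.2, prev_gamma + dm * 0.02)
    return optimal_gamma(y, lo=lo, hi=hi)          # step 310

def gamma_correct(y, gamma):
    """Y = 255 * (Y0 / 255)^gamma (equation (11)), applied in step 312."""
    return np.clip(255.0 * (y.astype(np.float32) / 255.0) ** gamma, 0, 255).astype(np.uint8)
```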



FIG. 4 is a flowchart of a sigma filtering method 400 for reducing video noise in the video in one embodiment of the invention. A conventional three dimensional (3D) sigma filter can be applied to reduce image noise, sharpen regions, preserve edges, and retain thin lines. The conventional 3D sigma filter can be applied as follows:











v(x,y,z) = (1/Nc)·Σ_{(l,m,n)∈Nc(x,y,z)} v(l,m,n),  (12)







where x and y are the horizontal and vertical coordinates of the pixel in a frame, z is the temporal coordinate of the pixel in the video, and v is the new brightness of the pixel located at coordinates (x, y, z). Nc is the number of pixels (l,m,n) that satisfy the following conditions: (1) the pixel (l,m,n) is selected from a neighborhood N(x,y,z) of (2w+1)×(2h+1)×(2l+1) around (x,y,z); and (2) the brightness of pixel (l,m,n) is within a noise variance 2σ of the brightness of the pixel (x,y,z). Nc is mathematically defined as follows:





(l,m,n) ∈ Nc(x,y,z) if (l,m,n) ∈ N(x,y,z) and |v(l,m,n)−v(x,y,z)| < 2σ,  (13)
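
For reference, a direct (and deliberately slow) rendering of the conventional 3D sigma filter of equations (12)-(13) might look like the sketch below; the neighborhood half-sizes w, h, t and the noise parameter sigma are caller-supplied assumptions.

```python
import numpy as np

def sigma_filter_3d(video, w=1, h=1, t=1, sigma=10):
    """Average, for each pixel, the neighbors whose brightness is within 2*sigma of it."""
    video = video.astype(np.float32)
    out = np.empty_like(video)
    frames, rows, cols = video.shape                  # indexed as [z, y, x]
    for z in range(frames):
        for y in range(rows):
            for x in range(cols):
                center = video[z, y, x]
                block = video[max(z - t, 0):z + t + 1,
                              max(y - h, 0):y + h + 1,
                              max(x - w, 0):x + w + 1]
                nc = block[np.abs(block - center) < 2 * sigma]   # Nc(x,y,z), equation (13)
                out[z, y, x] = nc.mean()                         # equation (12)
    return out
```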


To improve processing speed, the conventional 3D sigma filter has been separated along the three dimensions into a row filter, a column filter, and a temporal filter. Furthermore, only the pixels that have been previously processed are used to determine the new brightness of the pixel.


In step 402, a row filter is applied to the current pixel. The row filter is applied as follows:











v1(x,y,z) = { [v1(x−1,y,z) + v(x,y,z)]/2,  if v(x−1,y,z) ∈ Nc(x,y,z)
            { v(x,y,z),                    if v(x−1,y,z) ∉ Nc(x,y,z),  (14)

v(l,m,n) ∈ Nc(x,y,z) if |v(l,m,n)−v(x,y,z)| < 2σ,  (15)







where v1 is the first new brightness of the pixel at (x, y, z) based on (1) the first new brightness of an adjacent pixel in the same row in the frame and (2) the original brightness of the pixel, and (x−1, y, z) are the coordinates of the adjacent pixel in the same row. In one embodiment, noise variance σ is 10.


In essence, the difference in brightness of the current pixel and the adjacent pixel in the same row is determined. If the difference is less than 2σ, then the average brightness of the current pixel and the adjacent pixel is set as the first new brightness of the pixel. Otherwise, the first new brightness of the pixel is set equal to the original brightness of the pixel.
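
As an illustrative sketch, the sigma row filter of equations (14)-(15) can be written as a single left-to-right pass over each row; the column and temporal filters of steps 404 and 406 follow the same pattern and appear in the combined sketch after equation (17).

```python
import numpy as np

def sigma_row_filter(frame, sigma=10):
    """Row filter: blend each pixel with its already-filtered left neighbor when
    the original brightnesses differ by less than 2*sigma (equations (14)-(15))."""
    orig = frame.astype(np.float32)
    v1 = orig.copy()
    rows, cols = orig.shape
    for y in range(rows):
        for x in range(1, cols):
            if abs(orig[y, x - 1] - orig[y, x]) < 2 * sigma:   # equation (15)
                v1[y, x] = (v1[y, x - 1] + orig[y, x]) / 2.0   # equation (14)
            # otherwise v1 keeps the original brightness v(x, y, z)
    return v1
```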


In step 404, a column filter is applied to the current pixel. The column filter is applied as follows:











v2(x,y,z) = { [v2(x,y−1,z) + v1(x,y,z)]/2,  if v1(x,y−1,z) ∈ Nc(x,y,z)
            { v1(x,y,z),                    if v1(x,y−1,z) ∉ Nc(x,y,z),  (16)







where v2 is the second new brightness of the pixel based on (1) the second new brightness of an adjacent pixel in the same column in the frame and (2) the first new brightness of the pixel, and (x, y−1, z) are the coordinates of the adjacent pixel in the same column.


In essence, the difference in brightness of the current pixel and the adjacent pixel in the same column is determined. If the difference is less than 2σ, then the average brightness of the current pixel and the adjacent pixel is set as the second new brightness of the pixel. Otherwise, the second new brightness of the pixel is set equal to the first new brightness of the pixel determined in step 402.


In step 406, the temporal filter is applied to the current pixel. The temporal filter is applied as follows:











v3(x,y,z) = { [v3(x,y,z−1) + v2(x,y,z)]/2,  if v2(x,y,z−1) ∈ Nc(x,y,z)
            { v2(x,y,z),                    if v2(x,y,z−1) ∉ Nc(x,y,z),  (17)







where v3 is a third new brightness of the pixel based on (1) the third new brightness of an adjacent pixel at the same column and row in the preceding frame and (2) the second new brightness of the pixel, and (x, y, z−1) are the coordinates of the adjacent pixel at the same column and row in the preceding frame.


In essence, the difference in brightness of the current pixel and the adjacent pixel at the same column and row in the preceding frame is determined. If the difference is less than 2σ, then the average brightness of the current pixel and the adjacent pixel is set as the third new brightness of the pixel. Otherwise, the third new brightness of the pixel is set equal to the second new brightness of the pixel determined in step 404. The third new brightness is the final modified brightness of the pixel.
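
Assembling steps 402-406, one hedged sketch of the separable sigma filtering of a single luminance frame follows. The arguments prev_v2 and prev_v3 are the column-filtered and final brightnesses of the preceding frame (None for the first frame), sigma = 10 is the example value given above, and the exact brightness values used in each comparison follow one reasonable reading of equations (15)-(17).

```python
import numpy as np

def sigma_filter_frame(frame, prev_v2=None, prev_v3=None, sigma=10):
    """Apply the row, column, and temporal sigma filters (equations (14)-(17))."""
    orig = frame.astype(np.float32)
    thresh = 2 * sigma
    rows, cols = orig.shape

    # Step 402: row filter, equation (14).
    v1 = orig.copy()
    for y in range(rows):
        for x in range(1, cols):
            if abs(orig[y, x - 1] - orig[y, x]) < thresh:
                v1[y, x] = (v1[y, x - 1] + orig[y, x]) / 2.0

    # Step 404: column filter, equation (16); each row depends only on the row above.
    v2 = v1.copy()
    for y in range(1, rows):
        close = np.abs(v1[y - 1] - v1[y]) < thresh
        v2[y, close] = (v2[y - 1, close] + v1[y, close]) / 2.0

    # Step 406: temporal filter, equation (17), using the preceding filtered frame.
    v3 = v2.copy()
    if prev_v2 is not None and prev_v3 is not None:
        close = np.abs(prev_v2 - v2) < thresh
        v3[close] = (prev_v3[close] + v2[close]) / 2.0

    return v2, v3   # v3 is the final brightness; v2 is carried to the next frame
```

The caller would write v3 back into the Y channel of the enhanced frame and pass v2 and v3 forward when filtering the next frame.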


In step 408, it is determined if the current pixel is the last pixel in the current frame. If not, then step 408 is followed by step 410 and method 400 is repeated until all the pixels in the current frame have been processed. Otherwise step 408 is followed by step 412, which ends method 400. In one embodiment, the pixels are processed in a left-to-right and then down raster sequence.


Various other adaptations and combinations of features of the embodiments disclosed are within the scope of the invention. For example, the methods described above may be implemented in hardware and may operate on a different color space. Numerous embodiments are encompassed by the following claims.

Claims
  • 1. A method for enhancing a video, comprising: applying brightness enhancement, comprising: determining an optimal gamma for a gamma correction to be applied to pixels in a first frame, the optimal gamma maximizing an entropy of the first frame; and applying the gamma correction to the pixels in the first frame with the optimal gamma determined for the first frame.
  • 2. The method of claim 1, wherein: the entropy comprises:
  • 3. The method of claim 2, wherein said determining an optimal gamma comprises searching in a first range for the optimal gamma that maximizes the entropy of the first frame.
  • 4. The method of claim 3, wherein the first range comprises 0.65 to 1.2.
  • 5. The method of claim 3, wherein said applying brightness enhancement further comprises: determining a change in mean values of brightness between the first frame and a second frame; determining the optimal gamma for the gamma correction to be applied to pixels in the second frame, wherein said determining the optimal gamma comprises searching in a second range that is smaller than the first range for the optimal gamma that maximizes the entropy of the second frame; and applying the gamma correction to the pixels in the second frame with the optimal gamma determined for the second frame.
  • 6. The method of claim 5, wherein the second range comprises: [γ0−Δm ·0.02,γ0+Δm·0.02],
  • 7. The method of claim 1, wherein: said applying brightness enhancement further comprises: determining parameters for a linear histogram stretch to be applied to the pixels in the first frame; applying the linear histogram stretch to the pixels in the first frame with the parameters determined for the first frame; determining a difference in histogram properties between the first frame and a second frame; and when the difference is less than a first threshold, applying the linear histogram stretch to pixels in the second frame with the parameters determined for the first frame; the method further comprises applying video noise reduction, comprising: applying a sigma row filter to each pixel in each frame; applying a sigma column filter to each pixel; and applying a sigma temporal filter to each pixel.
  • 8. The method of claim 7, wherein: the parameters for the linear histogram stretch comprises:
  • 9. The method of claim 8, further comprising: (5) when the difference in the histogram properties between the first frame and the second frame is greater than the first threshold: (a) determining the parameters a and b for the linear histogram stretch to be applied to the pixels in the second frame; (b) determining a change in the parameter a between the first frame and the second frame; (c) when the difference is less than a second threshold and the change is less than a third threshold, applying the linear histogram stretch to the pixels in the second frame with the parameters a and b determined for the second frame; (d) when the difference is greater than the second threshold or the change is greater than the third threshold, applying the linear histogram stretch to the pixels in the second frame with the parameters a and b determined for the first frame.
  • 10. The method of claim 9, wherein the linear histogram stretch further comprises stretching color components of the pixels.
  • 11. The method of claim 10, wherein said stretching color components comprises: Cr=a·Cr0,Cb=a·Cb0,
  • 12. The method of claim 8, wherein max and min are set so that a first percentage of a total number of pixels in the frame will be set to the maximum brightness and a second percentage of the total number of pixels in the frame will be set to the minimum brightness.
  • 13. The method of claim 7, wherein said applying a sigma row filter comprises: determining a difference in brightness between the pixel and an adjacent pixel located in the same row in the frame; and when the difference is less than a threshold, then setting a brightness of the pixel to the average of the brightness of the pixel and the adjacent pixel in the same row in the frame.
  • 14. The method of claim 13, wherein said applying a sigma column filter comprises: determining a difference in brightness between the pixel and an adjacent pixel located in the same column in the frame; and when the difference is less than the threshold, then setting the brightness of the pixel to the average of the brightness of the pixel and the adjacent pixel in the same column in the frame.
  • 15. The method of claim 14, wherein said applying a sigma temporal filter comprises: determining a difference in brightness between the pixel and an adjacent pixel located at the same row and column in a preceding frame; and when the difference is less than the threshold, then setting the brightness of the pixel to the average of the brightness of the pixel and the adjacent pixel located at the same row and column in the preceding frame.
  • 16. The method of claim 7, wherein said applying a sigma row filter comprises:
  • 17. The method of claim 16, wherein said applying a sigma column filter comprises:
  • 18. The method of claim 17, wherein said applying a sigma temporal filter comprises:
CROSS REFERENCE TO RELATED APPLICATION

This application is a division of U.S. patent application Ser. No. 10/895,875, filed on Jul. 20, 2004, and incorporated herein by reference.

Divisions (1)
Number Date Country
Parent 10895875 Jul 2004 US
Child 12334875 US