This invention relates to software for image inpainting and texture synthesis.
Damaged images can be repaired by professional artists using a technique called inpainting. Various software inpainting techniques have been developed to repair images as undetectably as a professional artist would. Bertalmio et al. proposes a method based on information propagation. M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image Inpainting," Computer Graphics, Proceedings of SIGGRAPH, pp. 417–424, New Orleans, July 2000. The method takes the known gray values of the points on the boundary of the damaged areas and propagates these gray values into the damaged area along the direction of smallest gradient. Chan et al. proposes a method that repairs images by solving a Partial Differential Equation (PDE). T. Chan, A. Marquina, and P. Mulet, "High-Order Total Variation-Based Image Restoration," SIAM Journal on Scientific Computing, Vol. 22, No. 2, pp. 503–516, 2000.
Many damaged images look fine after being repaired by the aforementioned inpainting methods. However, these methods share a common shortcoming: none of them can recover the texture information of the image. This shortcoming is not very noticeable when only a small area of the image is damaged. When a large area of the image is damaged, however, the result looks blurry without texture information and can easily be detected by the human eye.
Texture synthesis methods can also repair damaged images, such as non-parametric sampling that creates texture using a Markov random field (MRF) model. In the MRF model, the conditional Probability Distribution Function (PDF) of a point is calculated using its neighboring points. The information of the damaged point is duplicated from a point that has the same conditional probability distribution. Efros et al. proposes a method that duplicates the information point by point. A. Efros and T. Leung, "Texture Synthesis by Non-parametric Sampling," In Proceedings of International Conference on Computer Vision, 1999. Liang et al. proposes a method based on patches (i.e., blocks). L. Liang, C. Liu, Y. Xu, B. Guo, and H. Shum, "Real-Time Texture Synthesis by Patch-Based Sampling," Microsoft Research Technical Report MSR-TR-2001-40, March 2001.
The aforementioned texture synthesis methods can repair pure texture images well. However, images encountered in practice (such as natural images) often do not have purely repetitive texture features. Furthermore, the texture features of these images are complicated by their environment, such as lighting. Repairing images using the aforementioned texture synthesis methods without addressing these problems will not produce a satisfactory result.
Thus, what is needed is a method for repairing damaged images that addresses the shortcomings of the conventional inpainting and texture synthesis methods.
Use of the same reference numbers in different figures indicates similar or identical elements.
In one embodiment of the invention, a method for generating texture includes (1) selecting a target patch to be filled in an image, (2) selecting a sample patch as a candidate for filling the target patch, (3) determining a first difference between a first area surrounding the target patch and a corresponding first area surrounding the sample patch, and a second difference between a second area surrounding the target patch and a corresponding second area surrounding the sample patch, (4) multiplying a larger of the first difference and the second difference with a first weight factor, and a smaller of the first difference and the second difference with a second weight factor, and (5) summing the weighted first difference and the weighted second difference as a distance between the target patch and the sample patch.
Problems of a Conventional Patch-based Sampling Algorithm
Liang et al. discloses a patch-based sampling algorithm for synthesizing textures from a sample. This sampling algorithm is hereafter explained in reference to the figures. A target area 4 of an image 2 is divided into target patches Bk, most of which have a size of w×w. Because the width and height of target area 4 are generally not integer multiples of w, the target patches along the edges of target area 4 have the sizes:

w×w1 and w2×w,   (1)

where

w1=W mod w and w2=H mod w,   (2)

and where W and H express the width and height of the target area, respectively, and mod is the function that calculates the remainder of W or H divided by w.
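For illustration only, this division into patches could be sketched as follows; the function name patch_grid and the coordinate convention are assumptions of this sketch, not part of Liang et al.'s disclosure:

def patch_grid(W, H, w):
    # Tile a W x H target area into patches of size w x w; the last column
    # and row of patches pick up the leftover width w1 = W mod w and
    # height w2 = H mod w when W and H are not multiples of w.
    patches = []
    for y in range(0, H, w):
        for x in range(0, W, w):
            patches.append((x, y, min(w, W - x), min(w, H - y)))
    return patches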
Each target patch Bk includes a boundary zone EBk of width wE.
Sample texture 6 is divided into sample patches B(x,y) (only one is labeled for clarity), where (x,y) denotes the left-lowest point of the sample patch. Sample patches B(x,y) and target patches Bk have the same size. Each sample patch B(x,y) includes a boundary zone EB(x,y) of width wE.
The corresponding points in boundary zones EBk and EB(x,y) are compared to determine whether a sample patch B(x,y) is an acceptable match for a target patch Bk. The set of acceptable sample patches is:

ψB = {B(x,y) | d(EBk, EB(x,y)) < δ},   (3)

where d is the distance between boundary zones EBk and EB(x,y), and δ is a prescribed threshold. The distance d is calculated as:

d(EBk, EB(x,y)) = [ (1/A) · Σi (pi − qi)² ]^(1/2),   (4)

where A is the number of corresponding points in boundary zones EBk and EB(x,y), pi is the gray value of the ith point in EBk, and qi is the gray value of the corresponding point in EB(x,y).
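For illustration only, the following sketch shows one way formulas (3) and (4) could be computed; the function names boundary_distance and candidate_set, and the representation of boundary zones as equal-size NumPy arrays, are assumptions of this sketch rather than part of Liang et al.'s disclosure:

import numpy as np

def boundary_distance(target_zone, sample_zone):
    # Formula (4): root-mean-square gray-value difference over the A
    # corresponding points of the two boundary zones (equal-size arrays).
    diff = target_zone.astype(float) - sample_zone.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def candidate_set(target_zone, sample_zones, threshold):
    # Formula (3): collect every sample patch whose boundary-zone distance
    # to the target patch falls below the prescribed threshold.
    return [k for k, zone in enumerate(sample_zones)
            if boundary_distance(target_zone, zone) < threshold]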
After all the sample patches B(x,y) in sample texture 6 are compared with a target patch Bk, a sample patch B(x,y) is randomly selected from set ψB and used to fill target patch Bk. If set ψB is empty, then the sample patch B(x,y) with the smallest distance is selected to fill target patch Bk. This process is repeated for each target patch Bk in target area 4 of image 2.
After filling in one target patch, the texture of that target patch becomes part of the known boundary zones of the adjacent target patches that have not yet been filled.
One disadvantage of Liang et al. is that formula 4 applies the same weight to all the areas that make up the boundary zones. For example, assume a target patch having a boundary zone with known left and lower areas is compared with a sample patch whose boundary zone has a very similar or identical left area but a very different lower area. The calculated distance may still be smaller than the prescribed threshold because of the similarity between the left areas, even though the lower areas around the target and sample patches are very different. When such a sample patch is used to fill the target patch, the lower portion of the sample patch may be clearly visible in the image. Furthermore, the selection of this sample patch will affect the subsequent target patches that are filled, as the dissimilarity is propagated through subsequent matching operations.
Another disadvantage of Liang et al. is that it fails to compensate for asymmetrical lighting. Asymmetrical lighting in an image will give different gray values to the same texture. This makes it difficult to fill a target area with the proper texture because areas with similar gray values may have different textures while areas with the same texture may have different gray values. When a sample patch is pasted directly onto a target patch, then the sample patch may be visible in the image.
Improvement to the Patch-based Sampling Algorithm
In step 12, the computer optionally converts image 104 from a color image into a gray scale image. The computer can convert the color values of each point into a gray scale value I by taking a weighted sum of the point's R, G, and B color values. This step may help to speed up the processing described later. Step 12 can be skipped if image 104 is a gray scale image from the start.
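As an illustration only, the following sketch performs such a conversion on an H×W×3 color array; the BT.601 luma weights used here are one common choice and are an assumption of this sketch rather than the specific coefficients of step 12:

import numpy as np

def to_gray(rgb):
    # Weighted sum of the R, G, and B values of each point.  The
    # 0.299/0.587/0.114 weights are the common BT.601 luma coefficients
    # (an assumption of this sketch).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b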
In step 14, the computer receives a target area 102 to be filled. Typically, target area 102 is designated by a user after the user visually inspects image 104.
In step 16, the computer divides target area 102 into target patches Bk with associated boundary zones EBk.
In step 18, the computer divides sample area 106 into sample patches B(x,y) with associated boundary zones EB(x,y).
In step 20, the computer selects a target patch Bk from target area 102 to be matched with a sample patch. In one embodiment, the computer selects the target patch in an inward spiral order.
In step 22, the computer selects a sample patch B(x,y) from sample area 106 to be compared with target patch Bk.
In steps 24 and 26, the computer determines the distance between the current target patch Bk and the current sample patch B(x,y). Specifically, in step 24, the computer determines the difference between the corresponding points in boundary zones EBk and EB(x,y) separately for each pair of corresponding boundary areas:

dn = [ (1/An) · Σi (pi − qi)² ]^(1/2),

where dn is the difference of the nth pair of corresponding boundary areas in boundary zones EBk and EB(x,y), An is the number of corresponding points in that pair of boundary areas, and pi and qi are the gray values of the ith pair of corresponding points.
In step 26, the computer weighs the differences of the corresponding areas and then sums the weighted differences as follows:

d = α1·d1 + α2·d2 + . . . + αn·dn = Σi αi·di,

where d is the distance between target patch Bk and sample patch B(x,y), di is the difference of the ith pair of corresponding boundary areas in a descending sequence, αi is the weight given to difference di, and n is the total number of corresponding boundary areas. The value of αi is determined by boundary width wE and patch size w (where patch size w is typically determined by the size of the smallest repeated unit of texture, known as a texton, and wE is typically a fraction of w).
In one embodiment, the value of αi is determined by boundary width wE and patch size w, with the differences di arranged in a descending sequence from the biggest to the smallest. Different boundary areas are thus given different weights, and the boundary area with the biggest difference has the weight 1.
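For illustration only, the following sketch combines steps 24 and 26; because the exact weight schedule of this embodiment is not reproduced here, the weights are supplied by the caller as a descending sequence whose first entry (for the biggest difference) is 1, and the function name weighted_patch_distance is an assumption of this sketch:

import numpy as np

def weighted_patch_distance(target_areas, sample_areas, weights):
    # Step 24: difference d_n of each pair of corresponding boundary areas,
    # computed as a root-mean-square gray-value difference.
    diffs = [np.sqrt(np.mean((t.astype(float) - s.astype(float)) ** 2))
             for t, s in zip(target_areas, sample_areas)]
    # Step 26: arrange the differences in a descending sequence and sum them
    # after applying the weights (weights[0] = 1 goes to the biggest difference).
    diffs.sort(reverse=True)
    return sum(a * d for a, d in zip(weights, diffs))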
In step 28, the computer determines if the distance between the current target patch Bk and the current sample patch B(x,y) is less than a prescribed threshold. If so, then step 28 is followed by step 30. Otherwise, step 28 is followed by step 32.
In step 30, the computer puts the current sample patch B(x,y) in a set ψB, which contains all the sample patches that can be used to fill target patch Bk.
In step 32, the computer determines if all the orientations of the current sample patch B(x,y) have been compared with the current target patch Bk. This is because image 104 may have symmetrical areas, caused by reflection or other reasons, that can provide a good match for a target patch. Thus, different orientations of the current sample patch B(x,y) are also compared for an acceptable match with the current target patch Bk. In one embodiment, the current sample patch B(x,y) is orthogonally rotated three times from its original orientation to see if any of the other orientations provides an acceptable match with the current target patch Bk. If all the orientations of the current sample patch B(x,y) have been tried, then step 32 is followed by step 36. Otherwise step 32 is followed by step 34.
In step 34, the computer rotates the current sample patch B(x,y). Step 34 is followed by step 24 and the newly rotated sample patch B(x,y) is compared with the current target patch Bk. Method 10 repeats this loop until all the orientations of the current sample patch B(x,y) have been compared to the current target patch Bk.
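For illustration only, the loop of steps 24 through 34 over the four orthogonal orientations could be sketched as follows; best_orientation is a hypothetical name, and target_distance stands for a caller-supplied function that returns the weighted boundary-zone distance of a given (possibly rotated) sample patch to the current target patch:

import numpy as np

def best_orientation(sample_patch, target_distance):
    # Compare the sample patch in its original orientation and in three
    # further 90-degree rotations, and keep the orientation whose weighted
    # boundary-zone distance to the current target patch is smallest.
    candidates = [np.rot90(sample_patch, k) for k in range(4)]
    return min(candidates, key=target_distance)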
In step 36, the computer determines if the last sample patch B(x,y) in sample area 106 has been compared with the current target patch Bk. If so, then step 36 is followed by step 38. Otherwise step 36 is followed by step 22 and another sample patch B(x,y) is selected. Method 10 thus repeats this loop until all the sample patches B(x,y) in sample area 106 have been compared to the current target patch Bk.
In step 38, the computer randomly selects a sample patch B(x,y) from set ψB to fill the current target patch Bk. If set ψB is empty, then the computer selects the sample patch B(x,y) with the smallest distance to fill the current target patch Bk.
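For illustration only, the selection of step 38 could be sketched as follows; select_sample_patch is a hypothetical name, candidate_indices corresponds to the set ψB, and distances holds the distance d computed for every sample patch:

import random

def select_sample_patch(candidate_indices, distances):
    # Step 38: pick a random member of the candidate set if it is non-empty;
    # otherwise fall back to the sample patch with the smallest distance.
    if candidate_indices:
        return random.choice(list(candidate_indices))
    return min(range(len(distances)), key=distances.__getitem__)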
In step 40, the computer adjusts the gray values of sample patch B(x,y) to make them look natural next to the boundary values of the current target patch Bk and at the same time keep their texture features. Assume an image g is the selected sample patch and an image f is the boundary around the current target patch. It is desired to generate a new sample patch u that has the same texture features as selected sample patch g (i.e., has the same color gradient) and has the same color at its boundary f.
In order to keep the gradient of the selected sample patch g, the new sample patch u should satisfy the functions:

∂u(x,y)/∂x = ∂g(x,y)/∂x and ∂u(x,y)/∂y = ∂g(x,y)/∂y,   (9)

where ∂u(x,y)/∂x is the gradient of the new sample patch u in the x direction at point (x,y), ∂u(x,y)/∂y is the gradient of the new sample patch u in the y direction at point (x,y), ∂g(x,y)/∂x is the gradient of the selected sample patch g in the x direction at point (x,y), and ∂g(x,y)/∂y is the gradient of the selected sample patch g in the y direction at point (x,y). Equation 9 can be rewritten as:

[∂u(x,y)/∂x − ∂g(x,y)/∂x]² + [∂u(x,y)/∂y − ∂g(x,y)/∂y]² = 0.   (10)
In order to make the new sample patch u have the same color at its boundary area f, the new sample patch u should satisfy the function:
u=f (11)
Equation 11 can be rewritten as:
u = f ⇒ (u − f) = 0 ⇒ (u − f)² = 0.   (12)
Equations 10 and 12 can be combined into a single equation as follows:

[∂u(x,y)/∂x − ∂g(x,y)/∂x]² + [∂u(x,y)/∂y − ∂g(x,y)/∂y]² + [u(x,y) − f(x,y)]² = 0.   (13)
In other words, the new sample patch u should satisfy equation 13 at every point (x,y). Satisfying these conditions for the entire new sample patch u, equation 13 is rewritten as:

Σ(x,y)∈Ω { [∂u(x,y)/∂x − ∂g(x,y)/∂x]² + [∂u(x,y)/∂y − ∂g(x,y)/∂y]² + [u(x,y) − f(x,y)]² } = 0,   (14)
where Ω is the area to be pasted with the new sample patch u. If equation 14 is written in continuous form, it becomes:

∫∫Ω { [∂u/∂x − ∂g/∂x]² + [∂u/∂y − ∂g/∂y]² + (u − f)² } d(x,y) = 0.   (15)
As there is generally no solution for the new sample patch u that satisfies equation 15 exactly, the closest solution for the new sample patch u is determined by defining a function J(u) as follows:

J(u) = ∫∫Ω { [∂u/∂x − ∂g/∂x]² + [∂u/∂y − ∂g/∂y]² + λ·(u − f)² } d(x,y),
where λ is the weight that balances satisfying the boundary condition against satisfying the gradient condition, and d(x,y) is dxdy. Conventional minimizing methods, such as steepest descent and iteration, can be used to minimize function J(u).
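For illustration only, the following sketch minimizes a discretized form of J(u) by steepest descent; the function names, the 5-point Laplacian discretization, and the use of a mask to mark where the boundary values f are known are all assumptions of this sketch rather than the disclosed implementation:

import numpy as np

def laplacian(a):
    # 5-point discrete Laplacian with replicated (edge) borders.
    p = np.pad(a, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * a

def adjust_gray_values(g, f, boundary_mask, lam=1.0, step=0.1, iters=500):
    # Steepest descent on a discretized J(u):
    #   J(u) = sum |grad u - grad g|^2 + lam * (u - f)^2 over the boundary band.
    # g             -- selected sample patch (gray values)
    # f             -- known gray values around the target patch, on the same grid
    # boundary_mask -- 1 where the condition u = f applies, 0 elsewhere
    u = g.astype(float).copy()           # start from the sample patch itself
    lap_g = laplacian(g.astype(float))   # fixed contribution of the gradient term
    for _ in range(iters):
        # Gradient of J: -2*(lap(u) - lap(g)) + 2*lam*mask*(u - f)
        grad_J = (-2.0 * (laplacian(u) - lap_g)
                  + 2.0 * lam * boundary_mask * (u - f))
        u -= step * grad_J
    return u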
In step 42, the computer fills the current target patch Bk with the adjusted sample patch B(x,y). Unlike Liang et al., where the boundary zone of the sample patch is also filled in to overlap with the known areas outside of the target patch and the areas derived from the filling of other target patches, the computer only fills in the selected sample patch B(x,y), without its boundary zone EB(x,y), into the current target patch Bk.
In step 44, the computer determines if the current target patch Bk is the last target patch in target area 102. If so, then step 44 is followed by step 46. Otherwise step 44 is followed by step 20 and another target patch Bk is selected. Method 10 thus repeats this loop until all the target patches Bk in target area 102 have been filled.
In step 46, the computer optionally converts image 104 from a gray scale image back into a color image. In one embodiment, the computer imposes the color characteristics of the original color image 104 onto the gray scale image using the method disclosed by Reinhard et al. E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color Transfer between Images," IEEE Computer Graphics and Applications, Vol. 21, No. 5, September/October 2001.
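For illustration only, the following sketch shows the statistics-matching idea underlying such a color transfer: each channel of the target is shifted and scaled so its mean and standard deviation match those of the source. Reinhard et al. perform this matching in the lαβ color space; the per-channel version here, and the function name transfer_color_statistics, are simplifying assumptions of this sketch:

import numpy as np

def transfer_color_statistics(source, target):
    # Match the per-channel mean and standard deviation of `target` to those
    # of `source` (a simplified stand-in for Reinhard et al.'s transfer).
    source = source.astype(float)
    result = target.astype(float).copy()
    for c in range(result.shape[-1]):
        s_mean, s_std = source[..., c].mean(), source[..., c].std()
        t_mean, t_std = result[..., c].mean(), result[..., c].std()
        if t_std > 0:
            result[..., c] = (result[..., c] - t_mean) * (s_std / t_std) + s_mean
    return result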
As described above, method 10 provides many improvements over the conventional patch-based sampling algorithm disclosed by Liang et al. First, method 10 weighs the different areas of the boundary zones differently when calculating the distance between a sample patch and a target patch. Second, method 10 adjusts the gray values of the sample patch to better match the target patch. Third, method 10 compares the sample patch at various orientations to better match the target patch. Fourth, method 10 converts color images to gray scale images to improve processing speed.
Various other adaptations and combinations of features of the embodiments disclosed are within the scope of the invention. Numerous embodiments are encompassed by the following claims.