Multiresolutional critical point filter and image matching using the same

Information

  • Patent Application
  • Publication Number
    20020064309
  • Date Filed
    December 10, 2001
  • Date Published
    May 30, 2002
Abstract
A multiresolutional filter called a critical point filter is introduced. This filter extracts a maximum, a minimum, and two types of saddle points of pixel intensity for every 2×2 (horizontal×vertical) pixels, so that an image of a lower level of resolution is newly generated for every type of critical point. Using this multiresolutional filter, a source image and a destination image are hierarchized, and the source hierarchical images and destination hierarchical images are matched using the image characteristics recognized through the filtering operation.
Description


BACKGROUND OF THE INVENTION

[0001] 1. Field of the Invention


[0002] This invention relates to a multiresolutional filter which generates hierarchical images. In particular, this invention relates to a method for generating an image having a lower resolution using a multiresolutional filter, and an image matching method capable of using this filtering method.


[0003] 2. Description of the Related Art


[0004] Automatic matching of two images, that is, correspondence between image regions or pixels, has been one of the most important and difficult themes of computer vision and computer graphics. For instance, once the images of an object from different view angles are matched, they can be used as the base for generating other views. When the matching of right-eye and left-eye images is computed, the result can immediately be used for stereo photogrammetry. When a model facial image is matched with another facial image, it can be used to extract characteristic facial parts such as the eyes, the nose, and the mouth. When two images of, for example, a man and a cat are matched exactly, all the in-between images can be generated and hence morphing can be done fully automatically.


[0005] However, in the existing methods, the correspondence of the points of the two images must generally be specified manually, which is a tedious process. To solve this problem, various methods for automatically detecting corresponding points have been proposed. For instance, the use of an epipolar line has been suggested to reduce the number of candidate pairs of points, but its complexity is high. To reduce the complexity, the coordinate values of a point in the left-eye image are usually assumed to be close to those of the corresponding point in the right-eye image. Imposing such a restriction, however, makes it very difficult to match global and local characteristics simultaneously.


[0006] In volume rendering, a series of cross-sectional images is used to constitute voxels. In such a case, it is conventionally assumed that a pixel in the upper cross-sectional image corresponds to the pixel occupying the same position in the lower cross section, and this pair of pixels is used for the interpolation. With this very simple method, volume rendering tends to suffer from unclear reconstruction of objects when the distance between consecutive cross sections is long and the shapes of the cross sections of the objects therefore change widely.


[0007] A great number of image matching algorithms, such as the stereo photogrammetry methods, use edge detection. In such methods, however, the resulting matched pairs of points are sparse, and the disparity values are interpolated to fill the gaps between the matched points. In general, all edge detectors suffer from the problem of judging whether a change in the pixel intensity within the local window they use really indicates the existence of an edge. Edge detectors are also susceptible to noise, because they are high-pass filters by nature and hence detect noise at the same time.


[0008] Optical flow is another known method. Given two images, optical flow detects the motion of objects (rigid bodies) in the images. It assumes that the intensity of each pixel of the objects does not change and computes the motion vector (u,v) of each pixel together with some additional conditions such as the smoothness of the vector field of (u,v). Optical flow, however, cannot detect the global correspondence between images because it concerns only the local change of pixel intensity and systematic errors are conspicuous when the displacements are large.


[0009] To recognize global structures, a great number of multiresolutional filters have been proposed. They are classified into two groups: linear filters and nonlinear filters. An example of the former is a wavelet. However, the linear filters are not useful for image matching, because the information on the pixel intensity of extrema, as well as their locations, is blurred. FIGS. 1(a) and 1(b) show the result of applying an averaging filter to the facial images in FIGS. 19(a) and 19(b), respectively. As shown in these drawings, the pixel intensity of extrema is reduced through averaging, while their locations are undesirably shifted. As a result, the information on the locations of the eyes (minima of the intensity) is ambiguous at this coarse level of resolution, and it is impossible to compute the correct matching at this level. Therefore, although a coarse level is prepared for the purpose of global matching, the global matching obtained does not correctly capture the true characteristics of the images (the eyes, i.e., the minima). Even when the eyes appear clearly at the finer levels of resolution, it is too late to undo the errors introduced in the global matching. Moreover, by smoothing the input images, stereo information in textured regions is also filtered out, as has been pointed out.


[0010] On the other hand, 1D sieve operators have become available as nonlinear filters which can be used for morphological operations. 1D sieve operators smooth out an image while preserving scale-space causality by choosing the minimum (or the maximum) inside a window of a certain size. The resulting image is the same size as the original, but is simpler because small undulations are removed. Although this operator may be classified as "a multiresolutional filter" in the broad sense that it reduces image information, it is not a multiresolutional filter in the normal sense, as it does not put images into a hierarchy while changing their resolution, as wavelets do. This operator therefore cannot be utilized for detecting correspondence between images.



SUMMARY OF THE INVENTION

[0011] In view of the above, the following problems are presented.


[0012] 1. Image processing methods have rarely been available for accurately identifying the characteristics of an image through relatively simple processing. In particular, effective proposals have scarcely been made for methods that extract the characteristics of an image while preserving information such as the pixel value and location of a characteristic point.


[0013] 2. Automatic detection of corresponding points based on the characteristics of an image has generally suffered from complex processing and poor robustness against noise. In addition, various restrictions have had to be imposed on the processing, and it has been difficult to obtain a matching which satisfies global and local characteristics at the same time.


[0014] 3. Although a multiresolutional filter may be introduced to recognize the global structure or characteristics of an image, in the case of a linear filter the information regarding the intensity and location of a pixel becomes blurred. As a result, corresponding points can hardly be recognized with sufficient accuracy. In addition, the 1D sieve operator, which is a non-linear filter, does not hierarchize an image and cannot be used for detecting corresponding points between images.


[0015] 4. With the above problems, extensive manual labor has been inevitably required in processing in order to accurately obtain corresponding points.


[0016] The present invention has been conceived to overcome the above problems, and aims to provide techniques for allowing accurate recognition of image characteristics in the image processing field.


[0017] In one aspect of the present invention, a new multiresolutional image filter is proposed. This filter is called a critical point filter as it extracts a critical point from an image. A critical point stands for a point having a certain characteristic in an image, including a maximum, where a pixel value (that is, an arbitrary value for an image or a pixel, such as a color number or the intensity) becomes maximum in a certain region, a minimum, where it becomes minimum, and a saddle point, where it becomes maximum for one direction and minimum for another. A critical point may be based on a topological concept, but it may possess any other characteristics. Selection of criteria for a critical point is not an essential matter in this invention.


[0018] In the above aspect, image processing using a multiresolutional filter is carried out. In a detection step, a two dimensional search is performed on a first image to detect a critical point. In a following generation step, the detected critical point is extracted for generation of a second image having a lower resolution than that of the first image. The second image inherits critical points from the first image. The second image, having a lower resolution than the first image, is preferably used for recognition of global characteristics of an image.


[0019] Another aspect of the invention relates to an image matching method using a critical point filter. In this aspect, source and destination images are matched. The terms “a source image” and “a destination image” are determined only for a discriminating purpose, and there is no essential difference between them.


[0020] In a first step of this aspect, a critical point filter is applied to a source image to generate a series of source hierarchical images each having a different resolution. In a second step, a critical point filter is applied to a destination image to generate a series of destination hierarchical images. Source and destination hierarchical images stand for a group of images which are obtained by hierarchizing source and destination images, respectively, and each consist of two or more images. In a third step, matching between source and destination hierarchical images is computed. In this aspect, image characteristics concerning a critical point are extracted and/or clarified using a multiresolutional filter. This facilitates matching. According to this aspect, matching may be totally unconstrained.


[0021] Still another aspect of the present invention relates to matching source and destination images. In this aspect, an evaluation equation is set beforehand for each of a plurality of matching evaluation items; these equations are combined into a combined evaluation equation; and an optimal matching is detected while attention is paid to the neighborhood of an extreme of the combined evaluation equation. The combined evaluation equation may be defined as a linear combination, or a sum, of the evaluation equations, at least one of which has been multiplied by a coefficient parameter. In such a case, the parameter may be determined by detecting the neighborhood of an extreme of the combined evaluation equation or of any of the evaluation equations. The term "the neighborhood of an extreme" is used here because some error is tolerable, as it does not seriously affect the present invention.


[0022] Since an extreme itself depends on the parameter, it becomes possible to determine the optimal parameter based on the behavior of the extreme. Automatic determination of a parameter, whose tuning is otherwise quite difficult, is thus achieved.







BRIEF DESCRIPTION OF THE DRAWINGS

[0023] The above and other objects, features, and advantages will become more apparent from the following description of the preferred embodiment taken in conjunction with the accompanying drawings, wherein:


[0024]
FIG. 1(a) shows an image obtained as the result of the application of an averaging filter to one human facial image;


[0025]
FIG. 1(b) shows an image obtained as the result of the application of an averaging filter to another human facial image;


[0026]
FIG. 1(c) shows an image of one human face at p(5,0) obtained in a preferred embodiment;


[0027]
FIG. 1(d) shows an image of another human face at p(5,0) obtained in a preferred embodiment;


[0028]
FIG. 1(e) shows an image of one human face at p(5,1) obtained in a preferred embodiment;


[0029]
FIG. 1(f) shows an image of another human face at p(5,1) obtained in a preferred embodiment;


[0030]
FIG. 1(g) shows an image of one human face at p(5,2) obtained in a preferred embodiment;


[0031]
FIG. 1(h) shows an image of another human face at p(5,2) obtained in a preferred embodiment;


[0032]
FIG. 1(i) shows an image of one human face at p(5,3) obtained in a preferred embodiment;


[0033]
FIG. 1(j) shows an image of another human face at p(5,3) obtained in a preferred embodiment;


[0034]
FIG. 2(R) shows an original quadrilateral;


[0035]
FIG. 2(A) shows an inherited quadrilateral;


[0036]
FIG. 2(B) shows an inherited quadrilateral;


[0037]
FIG. 2(C) shows an inherited quadrilateral;


[0038]
FIG. 2(D) shows an inherited quadrilateral;


[0039]
FIG. 2(E) shows an inherited quadrilateral;


[0040]
FIG. 3 is a diagram showing the relationship between a source image and a destination image and that between the m-th level and the (m−1)th level, using a quadrilateral;


[0041]
FIG. 4 shows the relationship between a parameter η and energy Cf;


[0042]
FIG. 5(a) is a diagram illustrating determination of whether or not the mapping for a certain point satisfies Bijectivity conditions through outer product computation;


[0043]
FIG. 5(b) is a diagram illustrating determination of whether or not the mapping for a certain point satisfies Bijectivity conditions through outer product computation;


[0044]
FIG. 6 is a flowchart of the entire procedure of a preferred embodiment;


[0045]
FIG. 7 is a flowchart showing the details of the process at S1 in FIG. 6;


[0046]
FIG. 8 is a flowchart showing the details of the process at S10 in FIG. 7;


[0047]
FIG. 9 is a diagram showing correspondence between partial images of the m-th and (m−1)th levels of resolution;


[0048]
FIG. 10 is a diagram showing source hierarchical images generated in the embodiment;


[0049]
FIG. 11 is a flowchart of a preparation procedure for S2 in FIG. 6;


[0050]
FIG. 12 is a flowchart showing the details of the process at S2 in FIG. 6;


[0051]
FIG. 13 is a diagram showing the way a submapping is determined at the 0-th level;


[0052]
FIG. 14 is a diagram showing the way a submapping is determined at the first level;


[0053]
FIG. 15 is a flowchart showing the details of the process at S21 in FIG. 12;


[0054]
FIG. 16 is a diagram showing the behavior of the energy C(m,s)f corresponding to f(m,s) (λ=iΔλ) obtained for a certain f(m,s) while λ is changed;


[0055]
FIG. 17 is a diagram showing the behavior of the energy C(n)f corresponding to f(n) (η=iΔη) (i=0, 1, . . . ) obtained while η is changed;


[0056]
FIG. 18(a) shows a left-eye image of an object;


[0057]
FIG. 18(b) shows a right-eye image of an object;


[0058]
FIG. 18(c) shows an interpolation image of an object, generated in the preferred embodiment;


[0059]
FIG. 19(a) shows an image of one human face;


[0060]
FIG. 19(b) shows an image of another human face;


[0061]
FIG. 19(c) shows an image of the human faces shown in FIGS. 19(a) and 19(b) superimposed on each other;


[0062]
FIG. 19(d) shows a morphing image generated in the preferred embodiment;


[0063]
FIG. 20(a) shows the face of a cat;


[0064]
FIG. 20(b) shows a morphing image of the face of a cat and a human face;


[0065]
FIG. 21(a) shows a left-eye image including many objects;


[0066]
FIG. 21(b) shows a right-eye image including many objects;


[0067]
FIG. 21(c) shows an interpolation image including many objects, generated in the preferred embodiment;


[0068]
FIG. 22(a) shows an MRI source image;


[0069]
FIG. 22(b) shows an MRI destination image;


[0070]
FIG. 22(c) shows an interpolation image generated in the preferred embodiment; and


[0071]
FIG. 22(d) shows a volume rendering image generated based on an interpolation image.







DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0072] Elemental techniques applied in a preferred embodiment will be first described in [1]. A concrete description of a processing procedure will then be given in [2], and experimental results will be reported in [3].


[0073] [1] Detailed Description of Elementary Techniques


[0074] [1.1] Introduction


[0075] Using a set of new multiresolutional filters called critical point filters, image matching is computed accurately. No prior knowledge of the objects is necessary. The matching of the images is computed at each resolution while going down the resolution hierarchy, generally from the coarse level to the fine level. The parameters necessary for the computation are set completely automatically by a dynamic computation analogous to that of the human visual system, so there is no need to specify the correspondence of points between the images manually.


[0076] This embodiment can be applied to, for instance, completely automated morphing, object recognition, stereo photogrammetry, volume rendering, smooth generation of motion images from a small number of frames. When applied to morphing, given images can be automatically transformed. When applied to volume rendering, intermediate images between cross sections can be accurately reconstructed, even when the distance between them is long and the cross sections vary widely in shape.


[0077] [1.2] The Hierarchy of the Critical point filters


[0078] The multiresolutional filters according to this embodiment preserve the intensity and the location of each critical point of the image while reducing its resolution. Let the width of the image be N and the height be M. For simplicity, N = M = 2^n is assumed, wherein n is a positive integer. The interval [0, N] ⊂ R is denoted by I. A pixel of the image at location (i,j) is denoted by p(i,j), wherein i, j ∈ I.


[0079] A multiresolution hierarchy is now introduced. Hierarchized image groups are constructed using a multiresolutional filter. The multiresolutional filter carries out a two-dimensional search on an original image, detects critical points, and extracts them to construct another image having a lower resolution. The size of each image at the m-th level of resolution is 2^m × 2^m (0 ≤ m ≤ n). The critical point filter constructs the following four new hierarchical images recursively:
$$
\begin{aligned}
p^{(m,0)}_{(i,j)} &= \min\!\left(\min\!\left(p^{(m+1,0)}_{(2i,2j)},\,p^{(m+1,0)}_{(2i,2j+1)}\right),\,\min\!\left(p^{(m+1,0)}_{(2i+1,2j)},\,p^{(m+1,0)}_{(2i+1,2j+1)}\right)\right)\\
p^{(m,1)}_{(i,j)} &= \max\!\left(\min\!\left(p^{(m+1,1)}_{(2i,2j)},\,p^{(m+1,1)}_{(2i,2j+1)}\right),\,\min\!\left(p^{(m+1,1)}_{(2i+1,2j)},\,p^{(m+1,1)}_{(2i+1,2j+1)}\right)\right)\\
p^{(m,2)}_{(i,j)} &= \min\!\left(\max\!\left(p^{(m+1,2)}_{(2i,2j)},\,p^{(m+1,2)}_{(2i,2j+1)}\right),\,\max\!\left(p^{(m+1,2)}_{(2i+1,2j)},\,p^{(m+1,2)}_{(2i+1,2j+1)}\right)\right)\\
p^{(m,3)}_{(i,j)} &= \max\!\left(\max\!\left(p^{(m+1,3)}_{(2i,2j)},\,p^{(m+1,3)}_{(2i,2j+1)}\right),\,\max\!\left(p^{(m+1,3)}_{(2i+1,2j)},\,p^{(m+1,3)}_{(2i+1,2j+1)}\right)\right)
\end{aligned}
\tag{1}
$$


[0080] wherein
$$p^{(n,0)}_{(i,j)} = p^{(n,1)}_{(i,j)} = p^{(n,2)}_{(i,j)} = p^{(n,3)}_{(i,j)} = p_{(i,j)} \tag{2}$$


[0081] These four images are hereinafter referred to as subimages. When $\min_{x \le t \le x+1}$ and $\max_{x \le t \le x+1}$ are abbreviated to $\alpha$ and $\beta$, respectively, the subimages can be expressed as follows:

$$p^{(m,0)} = \alpha(x)\alpha(y)\,p^{(m+1,0)},\quad p^{(m,1)} = \alpha(x)\beta(y)\,p^{(m+1,1)},\quad p^{(m,2)} = \beta(x)\alpha(y)\,p^{(m+1,2)},\quad p^{(m,3)} = \beta(x)\beta(y)\,p^{(m+1,3)}$$


[0082] That is, they are analogous to the tensor products of α and β. The subimages correspond to the respective kinds of critical points. As is apparent from the above equations, the critical point filter detects a critical point of the original image for every block consisting of 2×2 pixels. In this detection, a point having the maximum pixel value and a point having the minimum pixel value are searched for with respect to two directions, namely the vertical and horizontal directions, in each block. Although pixel intensity is employed as the pixel value in this embodiment, various other values relating to the image may be employed. A pixel having the maximum pixel values for the two directions, one having the minimum pixel values for the two directions, and one having the minimum pixel value for one direction and the maximum pixel value for the other direction are detected as a maximum, a minimum, and a saddle point, respectively.


[0083] The critical point filter makes the one-pixel image of the critical point detected inside each block represent the image (four pixels here) of that block, thereby reducing the resolution of the image. From the singularity-theoretical point of view, α(x)α(y) preserves the minima, β(x)β(y) preserves the maxima, and α(x)β(y) and β(x)α(y) preserve the saddle points.
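
The per-block computation in equation (1) is mechanical and easy to sketch. The following is a minimal illustration, assuming square grayscale NumPy arrays with power-of-two side length; the function names are illustrative, not taken from the patent.

```python
import numpy as np

def critical_point_filter(p):
    # One level of equation (1): from a 2^(m+1) x 2^(m+1) image, build the
    # four 2^m x 2^m subimages. Each output pixel summarizes one 2x2 block.
    a = p[0::2, 0::2]  # p(2i,   2j)
    b = p[0::2, 1::2]  # p(2i,   2j+1)
    c = p[1::2, 0::2]  # p(2i+1, 2j)
    d = p[1::2, 1::2]  # p(2i+1, 2j+1)
    p0 = np.minimum(np.minimum(a, b), np.minimum(c, d))  # minima
    p1 = np.maximum(np.minimum(a, b), np.minimum(c, d))  # saddle points
    p2 = np.minimum(np.maximum(a, b), np.maximum(c, d))  # saddle points
    p3 = np.maximum(np.maximum(a, b), np.maximum(c, d))  # maxima
    return p0, p1, p2, p3

def build_hierarchy(image, n):
    # Equation (2): at the finest level n, all four series equal the image.
    levels = {n: [image, image, image, image]}
    for m in range(n - 1, -1, -1):
        # Series s at level m is filtered from series s at level m+1.
        levels[m] = [critical_point_filter(levels[m + 1][s])[s]
                     for s in range(4)]
    return levels  # levels[m][s] is the subimage p^(m,s)
```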


[0084] In operation, critical point filtering is first applied separately to a source image and a destination image so as to generate a series of image groups, i.e., the source hierarchical images and the destination hierarchical images. Four source hierarchical images and four destination hierarchical images are generated, corresponding to the four types of critical points. Then, the source hierarchical images and the destination hierarchical images are matched. The minima are first matched using p(m,0), the saddle points are then matched using p(m,1) based on the previous matching result for the minima, the other saddle points are matched using p(m,2), and finally the maxima are matched using p(m,3).


[0085] FIGS. 1(c) and 1(d) show the subimages p(5,0) of the images in FIGS. 19(a) and 19(b), respectively. Similarly, FIGS. 1(e) and 1(f) show the subimages p(5,1); FIGS. 1(g) and 1(h) show the subimages p(5,2); and FIGS. 1(i) and 1(j) show the subimages p(5,3). Characteristics in the images can easily be matched using the subimages. The eyes can be matched by p(5,0), because the eyes are minima of pixel intensity in a face. The mouths can be matched by p(5,1), because the mouths have low intensity in the horizontal direction. Vertical lines on both sides of the necks become clear in p(5,2). The ears and the bright parts of the cheeks become clear in p(5,3), because these are maxima of pixel intensity.


[0086] As described above, the characteristics of an image can be extracted or distinguished using a critical point filter. Comparing the characteristics of an image shot by a camera with the characteristics of several objects stored beforehand makes it possible to recognize the object being shot.


[0087] [1.3] Computation of Mapping Between Images


[0088] The pixel of the source image at location (i,j) is denoted by p(n)(i,j) and that of the destination image at (k,l) is denoted by q(n)(k,l), where i, j, k, l ∈ I. The energy of the mapping between the images (described later) is then defined. This energy is determined by the difference in intensity between a pixel of the source image and its corresponding pixel of the destination image, and by the smoothness of the mapping. First, the mapping f(m,0): p(m,0) → q(m,0) between p(m,0) and q(m,0) with the minimum energy is computed. Based on f(m,0), the mapping f(m,1) between p(m,1) and q(m,1) with the minimum energy is computed. This process continues until f(m,3) between p(m,3) and q(m,3) is computed. Each f(m,i) (i = 0, 1, 2, . . . ) is referred to as a submapping. For reasons described later, the order of i is changed as follows in computing f(m,i):
$$f^{(m,i)} : p^{(m,\sigma(i))} \to q^{(m,\sigma(i))} \tag{3}$$


[0089] wherein σ(i)∈{0, 1, 2, 3}.


[0090] [1.3.1] Bijectivity


[0091] When the matching between a source image and a destination image is expressed by a mapping, that mapping should satisfy the Bijectivity Conditions (BC) between the two images, because the images should be connected by a map satisfying both surjection and injection, there being no conceptual supremacy of one image over the other. It should be noted, however, that the mappings to be constructed here are a digital version of the bijection. In this embodiment, a pixel is identified with a grid point.


[0092] The mapping of the source subimage (a subimage of the source image) to the destination subimage (a subimage of the destination image) is represented by f(m,s): I/2^(n−m) × I/2^(n−m) → I/2^(n−m) × I/2^(n−m) (s = 0, 1, . . . ), wherein f(m,s)(i,j) = (k,l) means that p(m,s)(i,j) of the source image is mapped to q(m,s)(k,l) of the destination image. For simplicity, a pixel q(k,l) where f(i,j) = (k,l) holds is denoted by qf(i,j).


[0093] The definition of bijectivity is not trivial when the data sets are discrete, as the image pixels (grid points) in this embodiment are. The bijection in this case is defined as follows, wherein i, i′, j, j′, k, and l are integers. First, each square
$$p^{(m,s)}_{(i,j)}\,p^{(m,s)}_{(i+1,j)}\,p^{(m,s)}_{(i+1,j+1)}\,p^{(m,s)}_{(i,j+1)} \tag{4}$$


[0094] on the source image plane, denoted by R, is considered, wherein i = 0, . . . , 2^m−1 and j = 0, . . . , 2^m−1. The edges of R are directed as follows:
$$\overrightarrow{p^{(m,s)}_{(i,j)}\,p^{(m,s)}_{(i+1,j)}},\quad \overrightarrow{p^{(m,s)}_{(i+1,j)}\,p^{(m,s)}_{(i+1,j+1)}},\quad \overrightarrow{p^{(m,s)}_{(i+1,j+1)}\,p^{(m,s)}_{(i,j+1)}},\quad \overrightarrow{p^{(m,s)}_{(i,j+1)}\,p^{(m,s)}_{(i,j)}} \tag{5}$$


[0095] It is necessary that the square be mapped by f to a quadrilateral on the destination image plane. The quadrilateral
$$q^{(m,s)}_{f(i,j)}\,q^{(m,s)}_{f(i+1,j)}\,q^{(m,s)}_{f(i+1,j+1)}\,q^{(m,s)}_{f(i,j+1)} \tag{6}$$


[0096] denoted by f(m,s)(R) should satisfy the following Bijectivity Conditions:


[0097] 1. The edges of the quadrilateral f(m,s)(R) should not intersect one another.


[0098] 2. The orientation of the edges of f(m,s)(R) should be the same as that of R (clockwise in FIG. 2).

3. For relaxation, mappings that are retractions are allowed.


[0099] Without relaxation, there would be no mappings which completely satisfy the BC other than an identity mapping. In this embodiment, the length of one edge of f(m,s)(R) may be zero. That is, f(m,s)(R) may be a triangle here. However, it must not be a point or a line segment having no area in this embodiment. If FIG. 2(R) is the original quadrilateral, FIGS. 2(A) and 2(D) satisfy BC, but FIGS. 2(B), 2(C), and 2(E) do not.


[0100] In the implementation, the following condition may be imposed to make it easier to ensure that the mapping is a surjection: each pixel on the boundary of the source image is mapped to the pixel that occupies the same location in the destination image, i.e., f(i,j) = (i,j) on the four lines i = 0, i = 2^m−1, j = 0, and j = 2^m−1. This condition is hereinafter referred to as the additional condition.


[0101] [1.3.2] The Energy of Mapping


[0102] [1.3.2.1] Cost related to the pixel intensity


[0103] The energy of the mapping f is now defined; a mapping whose energy is minimum will be searched for. The energy is determined mainly by the difference in intensity between a pixel of the source image and its corresponding pixel of the destination image. That is, the energy C(m,s)(i,j) of the mapping f(m,s) at (i,j) is defined as
$$C^{(m,s)}_{(i,j)} = \left|\,V\!\left(p^{(m,s)}_{(i,j)}\right) - V\!\left(q^{(m,s)}_{f(i,j)}\right)\right|^{2} \tag{7}$$


[0104] wherein V(p(m,s)(i,j)) and V(q(m,s)f(i,j)) are the intensity values of the pixels p(m,s)(i,j) and q(m,s)f(i,j), respectively. The total energy C(m,s)f of f is a matching evaluation equation, and can be defined as the sum of C(m,s)(i,j); i.e.,
$$C^{(m,s)}_{f} = \sum_{i=0}^{2^{m}-1}\sum_{j=0}^{2^{m}-1} C^{(m,s)}_{(i,j)} \tag{8}$$


[0105] [1.3.2.2] Cost related to the locations of the pixel for smooth mapping


[0106] To obtain smooth mappings, another energy Df of the mapping is introduced. This energy is determined by the locations of p(m,s)(i,j) and q(m,s)f(i,j) (i = 0, . . . , 2^m−1, j = 0, . . . , 2^m−1), regardless of the intensity of the pixels. The energy D(m,s)(i,j) of the mapping f(m,s) at (i,j) is defined as
$$D^{(m,s)}_{(i,j)} = \eta\,E^{(m,s)}_{0(i,j)} + E^{(m,s)}_{1(i,j)} \tag{9}$$


[0107] wherein the coefficient parameter η ≥ 0 is a real number, and
$$E^{(m,s)}_{0(i,j)} = \left\|(i,j) - f^{(m,s)}(i,j)\right\|^{2} \tag{10}$$
$$E^{(m,s)}_{1(i,j)} = \sum_{i'=i-1}^{i}\sum_{j'=j-1}^{j} \left\|\left(f^{(m,s)}(i,j) - (i,j)\right) - \left(f^{(m,s)}(i',j') - (i',j')\right)\right\|^{2} / 4 \tag{11}$$


[0108] wherein


$$\|(x,y)\| = \sqrt{x^{2} + y^{2}} \tag{12}$$


[0109] and f(i′,j′) is defined to be zero for i′ < 0 and j′ < 0. E0 is determined by the distance between (i,j) and f(i,j); it prevents a pixel from being mapped to a pixel too far away, and will be replaced later by another energy function. E1 ensures the smoothness of the mapping; it represents the difference between the displacement of p(i,j) and the displacements of its neighboring points. Based on the above, another equation for evaluating the matching, the energy Df, is defined by
$$D^{(m,s)}_{f} = \sum_{i=0}^{2^{m}-1}\sum_{j=0}^{2^{m}-1} D^{(m,s)}_{(i,j)} \tag{13}$$
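
As a concrete reading of equations (8)–(14), the sketch below evaluates the combined energy of a candidate mapping stored as an integer array. It is a minimal illustration with hypothetical names; the border terms of E1, where f is defined to be zero, are omitted for brevity.

```python
import numpy as np

def combined_energy(src, dst, f, eta, lam):
    # src, dst: 2^m x 2^m intensity arrays; f[i, j] = (k, l) gives the
    # destination pixel of source pixel (i, j).
    m = src.shape[0]
    # C_f (equations 7-8): squared intensity difference of matched pixels.
    C = np.sum((src.astype(np.float64) - dst[f[..., 0], f[..., 1]]) ** 2)
    # Displacement field f(i, j) - (i, j), used by both E0 and E1.
    idx = np.stack(np.meshgrid(np.arange(m), np.arange(m),
                               indexing="ij"), axis=-1)
    disp = f - idx
    E0 = np.sum(disp ** 2)                      # equation (10), summed
    E1 = 0.0                                    # equation (11), summed
    for di, dj in ((1, 0), (0, 1), (1, 1)):     # left/upper neighbours
        diff = disp[di:, dj:] - disp[:m - di, :m - dj]
        E1 += np.sum(diff ** 2) / 4.0
    D = eta * E0 + E1                           # equations (9) and (13)
    return lam * C + D                          # equation (14)
```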


[0110] [1.3.2.3] Total energy of the mapping


[0111] The total energy of the mapping, i.e., the combined evaluation equation, is defined as λC(m,s)f + D(m,s)f, wherein λ ≥ 0 is a real number. The goal is to detect an extreme of the combined evaluation equation, that is, to find a mapping giving the minimum energy:
$$\min_{f}\ \lambda C^{(m,s)}_{f} + D^{(m,s)}_{f} \tag{14}$$


[0112] Note that the mapping is the identity mapping (i.e., f(m,s)(i,j) = (i,j) for all i = 0, . . . , 2^m−1 and j = 0, . . . , 2^m−1) when λ = 0 and η = 0. As described later, the mapping is gradually modified or transformed from the identity mapping, because the case λ = 0, η = 0 is evaluated first in this embodiment. If the combined evaluation equation were instead defined as C(m,s)f + λD(m,s)f, with the position of λ changed, the equation at λ = 0 and η = 0 would simply be C(m,s)f, and pixels would then be corresponded at random merely because their intensities happen to be close, making the mapping totally meaningless. Modifying mappings based on such a meaningless mapping makes no sense. Thus, the coefficient parameter is placed so that the identity mapping is initially selected as the best mapping.


[0113] Similar to this embodiment, optical flow considers the difference in the pixel intensity and smoothness. However, it cannot be used for image transformation because it takes into account only local movement of an object. Using a critical point filter according to this embodiment, global correspondence can be detected.


[0114] [1.3.3] Determining the Mapping with Multiresolution


[0115] A mapping fmin that gives the minimum energy and satisfies the BC is searched for using the multiresolution hierarchy. The mappings between the source and destination images are computed at each level of resolution. Starting from the coarsest image at the top of the resolution hierarchy, the mapping is determined at each level while the mappings at the other levels are taken into consideration. The number of candidate mappings at each level is constrained by the mappings at the upper, i.e., coarser, level of the hierarchy. More specifically, a mapping at a certain level is determined using the mapping obtained at the level coarser by one.


[0116] When
$$(i',j') = \left(\left\lfloor \tfrac{i}{2} \right\rfloor,\ \left\lfloor \tfrac{j}{2} \right\rfloor\right) \tag{15}$$


[0117] holds, p(m−1,s)(i′,j′) and q(m−1,s)(i′,j′) are called the parent of p(m,s)(i,j) and q(m,s)(i,j), respectively, wherein ⌊x⌋ denotes the largest integer that does not exceed x. Conversely, p(m,s)(i,j) and q(m,s)(i,j) are called the child of p(m−1,s)(i′,j′) and q(m−1,s)(i′,j′), respectively. The function parent(i,j) is defined as
$$\mathrm{parent}(i,j) = \left(\left\lfloor \tfrac{i}{2} \right\rfloor,\ \left\lfloor \tfrac{j}{2} \right\rfloor\right) \tag{16}$$


[0118] A mapping f(m,s) between p(m,s)(i,j) and q(m,s)(k,l) is then determined by computing the energy and finding its minimum. The value f(m,s)(i,j) = (k,l) is determined as follows using f(m−1,s) (m = 1, 2, . . . , n). Imposing the condition that q(m,s)(k,l) should be inside the quadrilateral defined below, the mappings that are thought to be reasonable or natural are selected from among the mappings that satisfy the BC:
$$q^{(m,s)}_{g^{(m,s)}(i-1,\,j-1)}\ q^{(m,s)}_{g^{(m,s)}(i-1,\,j+1)}\ q^{(m,s)}_{g^{(m,s)}(i+1,\,j+1)}\ q^{(m,s)}_{g^{(m,s)}(i+1,\,j-1)} \tag{17}$$


[0119] wherein


$$g^{(m,s)}(i,j) = f^{(m-1,s)}(\mathrm{parent}(i,j)) + f^{(m-1,s)}(\mathrm{parent}(i,j) + (1,1)) \tag{18}$$


[0120] The quadrilateral defined above is hereinafter referred to as the inherited quadrilateral of p(m,s)(i,j). The pixel minimizing the energy is searched for inside the inherited quadrilateral.
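
A sketch of equations (16)–(18): given the submapping already computed at the coarser level, the four corners of the inherited quadrilateral of p(m,s)(i,j) can be obtained as below. This is an illustrative fragment (f_coarse is assumed to map coarse-level grid points to destination points, e.g., a dict); the boundary clamping a real implementation needs is omitted.

```python
def parent(i, j):
    # Equation (16): the coarser-level pixel containing (i, j).
    return (i // 2, j // 2)

def inherited_quadrilateral(f_coarse, i, j):
    # f_coarse[(i', j')] = (k, l): the submapping at the (m-1)th level.
    def g(a, b):
        # Equation (18): g sums two coarse-level image points, which
        # roughly doubles them back up to m-th-level coordinates.
        pi, pj = parent(a, b)
        k1, l1 = f_coarse[(pi, pj)]
        k2, l2 = f_coarse[(pi + 1, pj + 1)]
        return (k1 + k2, l1 + l2)
    # Equation (17): the four corners of the inherited quadrilateral.
    return [g(i - 1, j - 1), g(i - 1, j + 1),
            g(i + 1, j + 1), g(i + 1, j - 1)]
```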


[0121] The above description is illustrated in FIG. 3, wherein the pixels A, B, C, and D of the source image are mapped to A′, B′, C′, and D′ of the destination image, respectively, at the (m−1)th level of the hierarchy. The pixel p(m,s)(i,j) should be mapped to the pixel q(m,s)f(m,s)(i,j) that lies in the interior of the inherited quadrilateral A′B′C′D′. With the above consideration, the bridge from the mapping at the (m−1)th level to the mapping at the m-th level is established.


[0122] The energy E0 defined above is now replaced by


$$E_{0(i,j)} = \left\|f^{(m,0)}(i,j) - g^{(m)}(i,j)\right\|^{2} \tag{19}$$


[0123] for computing the submapping f(m,0), and


$$E_{0(i,j)} = \left\|f^{(m,s)}(i,j) - f^{(m,s-1)}(i,j)\right\|^{2} \quad (1 \le s) \tag{20}$$


[0124] for computing the submapping f(m,s) at the m-th level.


[0125] In this way, a mapping that keeps the energy of all the submappings low is obtained. Equation 20 associates the submappings for different types of critical points with one another, so that the subimages at the same level acquire high similarity. Equation 19 represents the distance between f(m,s)(i,j) and the location to which (i,j) should be mapped when regarded as part of a pixel at the (m−1)th level.


[0126] When there is no pixel inside the inherited quadrilateral A′B′C′D′ that satisfies the BC, the following measures are taken. First, pixels whose distance from the boundary of A′B′C′D′ is L (initially L = 1) are examined. Among them, the one with the minimum energy that satisfies the BC is chosen as the value of f(m,s)(i,j). L is increased until such a pixel is found or L reaches its upper bound L(m)max, which is fixed for each level m. If no such pixel is found at all, the third condition of the BC is temporarily abandoned, and mappings under which the quadrilateral degenerates to a point or a line are permitted in determining f(m,s)(i,j). If such a pixel is still not found, the first and second conditions of the BC are abandoned next.


[0127] Multiresolution approximation is essential for determining the global correspondence of the images while preventing the mapping from being affected by small details of the images. Without the multiresolution approximation, it is impossible to detect a correspondence between pixels whose distance is large; hence the size of the image would have to be limited to be very small, and only tiny changes in the images could be handled. Moreover, imposing smoothness on the mapping ordinarily prevents such pixels from being matched, because the energy of a mapping from a pixel to a distant pixel is high. The multiresolution approximation enables finding the appropriate correspondence of such pixels, because the distance between them is small at the upper (coarser) levels of the resolution hierarchy.


[0128] [1.4] Automatic Determination of the Optimal Parameter Values


[0129] One of the main deficiencies of existing image matching techniques lies in the difficulty of parameter tuning. In most cases, the tuning is done manually and it has been extremely difficult to choose the optimal value. According to this embodiment, the optimal parameter values can be obtained completely automatically.


[0130] The system of this embodiment includes two parameters, namely λ and η, where λ is the weight of the difference in pixel intensity and η represents the stiffness of the mapping. These parameter values are increased step by step, starting from 0. First, λ is increased while η is fixed at 0. As λ grows from 0 and the minimum of the combined evaluation equation (equation 14) is traced, the value of C(m,s)f for each submapping generally becomes smaller, which basically means that the two images are matched better. If λ exceeds its optimal value, however, the following phenomena occur:


[0131] 1. Pixels which should not be corresponded are erroneously corresponded only because their intensities are close.


[0132] 2. Correspondence between images becomes inaccurate, and the mapping is spoiled.


[0133] 3. D(m,s)f in equation 14 tends to increase abruptly.


[0134] 4. Since the value of equation 14 tends to increase abruptly, f(m,s) changes so as to suppress the abrupt increase of D(m,s)f. This in turn causes C(m,s)f to increase.


[0135] Therefore, the threshold value at which C(m,s)f turns to increase is detected while the minimum value of equation 14 is traced, and this value is taken as the optimal value of λ at η = 0. Subsequently, the behavior of C(m,s)f is observed while η is increased step by step, and η is determined automatically by the procedure described later; λ is then determined corresponding to this η.
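
The loop implied by this procedure is simple. The sketch below assumes a helper, minimize_eq14(lam), that minimizes equation (14) for a fixed λ and returns the resulting C(m,s)f; both the helper and the step size are illustrative, not prescribed by the patent.

```python
def find_optimal_lambda(minimize_eq14, d_lambda=0.005, lam_max=0.1):
    # Increase lambda step by step and return the value at which C_f
    # turns from decreasing to increasing (section [1.4.1]).
    lam = d_lambda
    prev_c = minimize_eq14(lam)
    while lam + d_lambda <= lam_max:
        lam += d_lambda
        c = minimize_eq14(lam)
        if c > prev_c:            # C_f turned to increase: optimum passed
            return lam - d_lambda
        prev_c = c
    return lam_max  # per [0151], the computation stops at lambda = 0.1
```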


[0136] This method is analogous to the focusing mechanism of the human visual system, in which the images of the left and right eyes are matched while one eye is moved, and the eye is fixed when the objects are clearly recognized.


[0137] [1.4.1] Dynamic determination of λ


[0138] λ is increased from 0 step by step, and a submapping is evaluated each time the value of λ changes. As shown in equation 14, the total energy is defined by λC(m,s)f + D(m,s)f. D(m,s)(i,j) in equation 9 represents the smoothness and theoretically becomes minimum for the identity mapping, which is examined first. E0 and E1 increase as the mapping is changed, because it becomes more and more distorted from the identity mapping. As E1 takes integer values, 1 is the smallest step of D(m,s)f. Thus, it is impossible to reduce the total energy by changing the mapping unless the new mapping reduces λC(m,s)(i,j) by 1 or more.


[0139] Under this condition, it is now shown that C(m,s)(i,j) decreases in normal cases as λ increases. The histogram of C(m,s)(i,j) is denoted by h(l), wherein h(l) is the number of pixels whose energy C(m,s)(i,j) is l². In order that λl² ≥ 1 holds, the case l² = 1/λ is considered. When λ varies slightly from λ1 to λ2, the number of pixels expressed by
$$A = \sum_{l=\lceil 1/\sqrt{\lambda_2}\rceil}^{\lceil 1/\sqrt{\lambda_1}\rceil} h(l) \approx \int_{1/\sqrt{\lambda_2}}^{1/\sqrt{\lambda_1}} h(l)\,dl = -\int_{\lambda_2}^{\lambda_1} h(l)\,\frac{1}{\lambda^{3/2}}\,d\lambda = \int_{\lambda_1}^{\lambda_2} \frac{h(l)}{\lambda^{3/2}}\,d\lambda \tag{21}$$


[0140] move to a more stable state having the energy of
$$C^{(m,s)}_{f} - l^{2} = C^{(m,s)}_{f} - \frac{1}{\lambda} \tag{22}$$


[0141] Here, the energy of all these pixels is assumed to drop to zero. This means that the value of C(m,s)f changes by
$$\partial C^{(m,s)}_{f} = -\frac{A}{\lambda} \tag{23}$$
$$\frac{\partial C^{(m,s)}_{f}}{\partial \lambda} = -\frac{h(l)}{\lambda^{5/2}} \tag{24}$$


[0142] As h(l) > 0, C(m,s)f decreases in normal cases. When λ tries to go beyond its optimal value, however, the above phenomenon, that is, an increase of C(m,s)f, occurs. The optimal value of λ is determined by detecting this phenomenon.


[0143] If
$$h(l) = H l^{k} = H \lambda^{-k/2} \tag{25}$$


[0144] is assumed wherein H>0 and k are constants,
$$\frac{\partial C^{(m,s)}_{f}}{\partial \lambda} = -\frac{H}{\lambda^{5/2 + k/2}} \tag{26}$$


[0145] holds. If k≠−3,
$$C^{(m,s)}_{f} = C + \frac{H}{\left(3/2 + k/2\right)\lambda^{3/2 + k/2}} \tag{27}$$


[0146] holds. This is a generalized equation for C(m,s)f, wherein C is a constant.


[0147] In detecting the optimal value of λ, the number of pixels violating the BC may be checked for safety. The probability of violating the BC when the mapping of a pixel is determined is denoted by p0. As
$$\frac{\partial A}{\partial \lambda} = \frac{h(l)}{\lambda^{3/2}} \tag{28}$$


[0148] holds, the number of pixels violating the BC increases at the rate
$$B_{0} = \frac{h(l)\,p_{0}}{\lambda^{3/2}} \tag{29}$$
and hence
$$\frac{B_{0}\lambda^{3/2}}{p_{0}\,h(l)} = 1 \tag{30}$$


[0149] is a constant. If h(l) = Hl^k is assumed, for example,


$$B_{0}\lambda^{3/2 + k/2} = p_{0} H \tag{31}$$


[0150] is a constant. When λ goes beyond the optimal value, however, this value increases abruptly. This phenomenon is detected to determine the optimal value of λ: whether or not B0λ^(3/2+k/2)/2^m exceeds an abnormal value B0thres is checked. The increasing rate B1 of pixels violating the third condition of the BC is checked in the same way, by testing whether or not B1λ^(3/2+k/2)/2^m exceeds an abnormal value B1thres. The reason the factor 2^m is introduced is described in [1.4.2]. The system is not sensitive to these two threshold values; they are used to detect excessive distortion of the mapping in case it fails to be detected through observation of the energy value C(m,s)f.


[0151] In the implementation, if λ exceeds 0.1 while the submapping f(m,s) is being computed, the computation of f(m,s) is abandoned and the computation of f(m,s+1) is started. This is because, when λ > 0.1, the computation of submappings is affected by a difference of a mere 3 out of 256 levels in pixel intensity, and it is then difficult to obtain a correct result.


[0152] [1.4.2] The Histogram h(l)


[0153] The checking of C(m,s)f does not depend on the histogram h(l), while the checking of the BC and its third condition may. When (λ, C(m,s)f) is actually plotted, k is typically around 1. In the implementation, k = 1 is used, and B0λ² and B1λ² are checked. If the true value of k is smaller than 1, B0λ² and B1λ² are not constants but increase gradually by the factor λ^((1−k)/2). When h(l) is a constant, for instance, the factor is λ^(1/2). Such a difference can, however, be absorbed by setting the threshold B0thres appropriately.


[0154] Here, the source image is modeled as a circular object with its center at (x0,y0) and radius r, given by
$$p_{(i,j)} = \begin{cases} 255\,r\,c\!\left(\sqrt{(i-x_0)^2 + (j-y_0)^2}\right) & \left(\sqrt{(i-x_0)^2 + (j-y_0)^2} \le r\right)\\ 0 & (\text{otherwise}) \end{cases} \tag{32}$$


[0155] and the destination image by
$$q_{(i,j)} = \begin{cases} 255\,r\,c\!\left(\sqrt{(i-x_1)^2 + (j-y_1)^2}\right) & \left(\sqrt{(i-x_1)^2 + (j-y_1)^2} \le r\right)\\ 0 & (\text{otherwise}) \end{cases} \tag{33}$$


[0156] with its center at (x1,y1) and radius r. Let c(x) be of the form c(x) = x^k. The histogram h(l) is then of the form


$$h(l) \propto r\,l^{k} \quad (k \ne 0) \tag{34}$$


[0157] if the centers (x0,y0) and (x1,y1) are sufficiently far apart.


[0158] When k = 1, the images represent objects with clear boundaries embedded in the background; these objects are darkest at their centers and brightest at their boundaries. When k = −1, the images represent objects with vague boundaries; these objects are brightest at their centers and darkest at their boundaries. Normal objects are generally considered to lie in between these two types. Therefore, −1 ≤ k ≤ 1 covers most cases, and it is ensured that equation 27 is a decreasing function over this range.


[0159] As can be seen from equation 34, r is affected by the resolution of the images; i.e., r is proportional to 2^m. This is why the factor 2^m was introduced in [1.4.1].


[0160] [1.4.3] Dynamic Determination of η


[0161] The other parameter, η, can also be determined automatically in the same manner. Initially, η = 0 is set, and the final mapping f(n) and the energy C(n)f at the finest resolution are computed. Subsequently, η is increased by a certain value Δη, and the final mapping f(n) and the energy C(n)f at the finest resolution are computed again. This process is repeated until the optimal value is obtained. η represents the stiffness of the mapping, because it is the weight of
$$E^{(m,s)}_{0(i,j)} = \left\|f^{(m,s)}(i,j) - f^{(m,s-1)}(i,j)\right\|^{2} \tag{35}$$


[0162] When η is zero, D(n)f is determined irrespective of the previous submapping, and the present submapping may be elastically deformed and become excessively distorted. When η is very large, D(n)f is almost completely determined by the previous submapping. The submappings are then very stiff, and the pixels are mapped to the same locations; the resulting mapping is therefore the identity mapping. When the value of η is increased from 0, C(n)f gradually decreases, as described later. When it goes beyond the optimal value, however, the energy begins to increase, as shown in FIG. 4, in which the x-axis represents η and the y-axis represents Cf.


[0163] The optimal value of η, which gives the minimum C(n)f, is obtained in this manner. Unlike the case of λ, however, various elements affect the computation relative to η, and as a result C(n)f fluctuates slightly while it varies. This is because a submapping is re-computed only once per step in the case of λ, whereas all the submappings must be re-computed in the case of η. Because of this fluctuation, whether an obtained value of C(n)f is the minimum cannot be judged immediately; the true minimum has to be searched for again, in detail, near the candidate minimum and with a smaller interval.


[0164] [1.5] Supersampling


[0165] The range of f(m,s) can be expanded to R × R (R being the set of real numbers) to increase the degree of freedom in deciding the correspondence between pixels. In this case, the intensity values of the pixels of the destination image are interpolated to provide f(m,s) with intensity values at non-integer points:
$$V\!\left(q^{(m,s)}_{f^{(m,s)}(i,j)}\right) \tag{36}$$


[0166] That is, supersampling is performed. In the implementation, f(m,s) is allowed to take integer and half integer values, and
$$V\!\left(q^{(m,s)}_{(i,j)+(0.5,\,0.5)}\right) \tag{37}$$
is given by
$$\left(V\!\left(q^{(m,s)}_{(i,j)}\right) + V\!\left(q^{(m,s)}_{(i,j)+(1,1)}\right)\right)\!/2. \tag{38}$$
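
In code, the half-integer case of equations (37)–(38) is a single average (an illustrative fragment):

```python
def half_integer_intensity(q, i, j):
    # Equations (37)-(38): intensity at the half-integer point
    # (i + 0.5, j + 0.5), the mean of the two diagonal neighbours.
    return (q[i, j] + q[i + 1, j + 1]) / 2.0
```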


[0167] [1.6] Normalization of the Pixel Intensity of Each Image


[0168] When the source and destination images contain quite different objects, the raw pixel intensity may not be usable to compute the mapping, because a large difference in pixel intensity causes an excessively large intensity energy C(m,s)f, which makes it difficult to obtain an accurate evaluation.


[0169] For instance, in a case where a human face and the face of a cat are matched, as shown in FIG. 20(a), the face of the cat is furry and is a mixture of very bright pixels and very dark pixels. In this case, to compute the submappings of the subimages between the two facial images, it is preferable to normalize the subimages; i.e., the darkest pixel intensity is set to 0, the brightest to 255, and the other pixel intensity values are linearly interpolated.
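
A minimal sketch of this normalization, assuming a NumPy array of intensities (the degenerate case of a flat subimage, which the text does not specify, is mapped to all zeros here):

```python
import numpy as np

def normalize_subimage(img):
    # Section [1.6]: darkest pixel -> 0, brightest -> 255, linear in between.
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)   # flat subimage: map everything to 0
    return (img - lo) * 255.0 / (hi - lo)
```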


[0170] [1.7] Implementation


[0171] In the implementation, a heuristic is used in which the computation proceeds linearly as the source image is scanned. First, the value of f(m,s) is determined at the top leftmost pixel (i,j) = (0,0). The value of each f(m,s)(i,j) is then determined while i is increased by one at each step. When i reaches the width of the image, j is increased by one and i is reset to zero. In this way, f(m,s)(i,j) is determined while the source image is scanned. Once the pixel correspondence has been determined for all points, one mapping f(m,s) is determined.


[0172] When a corresponding point qf(i,j) is determined for p(i,j), the corresponding point qf(i,j+1) of p(i,j+1) is determined next. The position of qf(i,j+1) is constrained by the position of qf(i,j), since the former must satisfy the BC. Thus, in this system, a point whose corresponding point has been determined earlier has higher priority. If the situation in which (0,0) is always given the highest priority continued, the mapping finally obtained might be unnecessarily biased. To avoid this, f(m,s) is determined as follows in this embodiment.


[0173] When (s mod 4) is 0, f(m,s) is determined starting from (0,0) while i and j are increased. When (s mod 4) is 1, it is determined starting from the top rightmost location while i is decreased and j is increased. When (s mod 4) is 2, it is determined starting from the bottom rightmost location while i and j are decreased. When (s mod 4) is 3, it is determined starting from the bottom leftmost location while i is increased and j is decreased. Since the concept of submappings, i.e., the parameter s, does not exist at the finest, n-th, level, the computation is done consecutively in two directions there, assuming s = 0 and s = 2. The four resulting scan orders are sketched below.
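
A hypothetical helper generating the four scan orders just described; the patent prescribes only the starting corner and the directions of i and j.

```python
def scan_order(s, m):
    # Section [1.7]: enumerate (i, j) in the order used for submapping s,
    # with i varying fastest, as in the raster scan described above.
    size = 2 ** m
    fwd = list(range(size))
    rev = list(range(size - 1, -1, -1))
    i_dir, j_dir = [(fwd, fwd),         # s mod 4 == 0: from (0,0), i++, j++
                    (rev, fwd),         # s mod 4 == 1: top right,  i--, j++
                    (rev, rev),         # s mod 4 == 2: bottom right, i--, j--
                    (fwd, rev)][s % 4]  # s mod 4 == 3: bottom left,  i++, j--
    return [(i, j) for j in j_dir for i in i_dir]
```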


[0174] In the actual implementation, the values of f(m,s)(i,j) that satisfy the BC are chosen from the candidates (k,l) by imposing a penalty on the candidates that violate the BC. The energy D(k,l) of a candidate that violates the third condition is multiplied by φ, and that of a candidate that violates the first or second condition is multiplied by ψ. In the implementation, φ = 2 and ψ = 100000 are used.


[0175] For the above checking of the BC, the following test is performed when (k,l) = f(m,s)(i,j) is determined. That is, for each grid point (k,l) in the inherited quadrilateral of f(m,s)(i,j), whether or not the z-component of the outer product
$$W = A \times B \tag{39}$$


[0176] is equal to or greater than 0 is checked, wherein
$$A = \overrightarrow{q^{(m,s)}_{f^{(m,s)}(i,\,j-1)}\ q^{(m,s)}_{f^{(m,s)}(i+1,\,j-1)}} \tag{40}$$
$$B = \overrightarrow{q^{(m,s)}_{f^{(m,s)}(i,\,j-1)}\ q^{(m,s)}_{(k,\,l)}} \tag{41}$$


[0177] (here, the vectors are regarded as 3D vectors, and the z-axis is defined in an orthogonal right-handed coordinate system). When W is negative, the candidate is penalized by multiplying D(m,s)(k,l) by ψ, so that it is not chosen if at all possible.
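
A sketch of the test in equations (39)–(41), treating image points as 2D coordinates and computing only the z-component of the 3D outer product (the names are illustrative):

```python
def bc_outer_product_penalty(q_a, q_b, q_cand, psi=100000.0):
    # q_a = q_{f(i, j-1)}, q_b = q_{f(i+1, j-1)}, q_cand = q_{(k, l)}.
    ax, ay = q_b[0] - q_a[0], q_b[1] - q_a[1]        # vector A, eq. (40)
    bx, by = q_cand[0] - q_a[0], q_cand[1] - q_a[1]  # vector B, eq. (41)
    w_z = ax * by - ay * bx                          # z-component of A x B
    # W negative: multiply the candidate's energy D by psi ([1.7]).
    return psi if w_z < 0 else 1.0
```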


[0178] FIGS. 5(a) and 5(b) illustrate the reason this condition is inspected. FIG. 5(a) shows a candidate without a penalty, and FIG. 5(b) shows one with a penalty. When the mapping f(m,s)(i,j+1) for the adjacent pixel at (i,j+1) is determined, there is no pixel on the destination image plane that satisfies the BC if the z-component of W is negative, because q(m,s)(k,l) then exceeds the boundary of the adjacent quadrilateral.


[0179] [1.7.1] The Order of Submappings


[0180] In the implementation, σ(0)=0, σ(1)=1, σ(2)=2, σ(3)=3, σ(4)=0 are used at the even levels of resolution, and σ(0)=3, σ(1)=2, σ(2)=1, σ(3)=0, σ(4)=3 are used at the odd levels. The submappings are thus shuffled. Note that the submappings are originally of four types, and s may be any of 0 to 3. In the implementation, however, processing for s = 4 is also performed, for the reason described later.


[0181] [1.8] Interpolations


[0182] After the mapping between the source and destination images is determined, the intensity values of the corresponding pixels are interpolated. In the implementation, trilinear interpolation is used. Suppose a square p(i,j)p(i+1,j)p(i+1,j+1)p(i,j+1) on the source image plane is mapped to a quadrilateral qf(i,j)qf(i+1,j)qf(i+1,j+1)qf(i,j+1) on the destination image plane. For simplicity, the distance between the image planes is assumed to be 1. The intermediate image pixels r(x,y,t) (0 ≤ x ≤ N−1, 0 ≤ y ≤ M−1) whose distance from the source image plane is t (0 ≤ t ≤ 1) are obtained as follows. First, the location of the pixel r(x,y,t), wherein x, y, t ∈ R, is determined by
$$\begin{aligned}(x,y) ={}& (1-dx)(1-dy)(1-t)\,(i,j) + (1-dx)(1-dy)\,t\,f(i,j)\\ &+ dx\,(1-dy)(1-t)\,(i+1,j) + dx\,(1-dy)\,t\,f(i+1,j)\\ &+ (1-dx)\,dy\,(1-t)\,(i,j+1) + (1-dx)\,dy\,t\,f(i,j+1)\\ &+ dx\,dy\,(1-t)\,(i+1,j+1) + dx\,dy\,t\,f(i+1,j+1)\end{aligned} \tag{42}$$


[0183] The value of the pixel intensity at r(x,y,t) is then determined by the equation
$$\begin{aligned}V(r(x,y,t)) ={}& (1-dx)(1-dy)(1-t)\,V\!\left(p_{(i,j)}\right) + (1-dx)(1-dy)\,t\,V\!\left(q_{f(i,j)}\right)\\ &+ dx\,(1-dy)(1-t)\,V\!\left(p_{(i+1,j)}\right) + dx\,(1-dy)\,t\,V\!\left(q_{f(i+1,j)}\right)\\ &+ (1-dx)\,dy\,(1-t)\,V\!\left(p_{(i,j+1)}\right) + (1-dx)\,dy\,t\,V\!\left(q_{f(i,j+1)}\right)\\ &+ dx\,dy\,(1-t)\,V\!\left(p_{(i+1,j+1)}\right) + dx\,dy\,t\,V\!\left(q_{f(i+1,j+1)}\right)\end{aligned} \tag{43}$$


[0184] wherein dx and dy are parameters varying from 0 to 1.
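
The sketch below evaluates equations (42)–(43) for one mapped square; p and q are the two intensity arrays and f maps source grid points to destination grid points (illustrative names; dx, dy, t ∈ [0, 1]).

```python
def interpolate(p, q, f, i, j, dx, dy, t):
    # Trilinear interpolation across one mapped square (equations 42-43).
    corners = [((i,     j),     (1 - dx) * (1 - dy)),
               ((i + 1, j),     dx       * (1 - dy)),
               ((i,     j + 1), (1 - dx) * dy),
               ((i + 1, j + 1), dx       * dy)]
    x = y = v = 0.0
    for (ci, cj), w in corners:
        k, l = f[(ci, cj)]                       # destination corner
        x += w * ((1 - t) * ci + t * k)          # equation (42), x part
        y += w * ((1 - t) * cj + t * l)          # equation (42), y part
        v += w * ((1 - t) * p[ci, cj] + t * q[k, l])  # equation (43)
    return (x, y), v    # location and intensity of the pixel r(x, y, t)
```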


[0185] [1.9] Constraining the Mapping


[0186] In the above determination of the mapping, no designation of pixel correspondence between the source and destination images is given from outside. However, when correspondence between particular pixels of the two images is provided beforehand, the mapping can be determined using this correspondence as a constraint.


[0187] The basic idea is to roughly distort the source image by the approximate mapping that maps the specified pixels of the source image to the specified pixels of the destination image, and then to compute the accurate mapping f.


[0188] First, the specified pixels of the source image are mapped to the specified pixels of the destination image, and an approximate mapping that maps the other pixels of the source image to appropriate locations is determined. Pixels in the vicinity of a specified pixel are mapped to locations near the position to which that specified pixel is mapped. The approximate mapping at the m-th level of the resolution hierarchy is denoted by F(m).


[0189] The approximate mapping F is determined as follows. First, the mappings of several pixels are specified. When the ns pixels


$$p(i_0, j_0),\ p(i_1, j_1),\ \ldots,\ p(i_{n_s-1}, j_{n_s-1}) \tag{44}$$


[0190] of the source image are to be specified, the values


$$F^{(n)}(i_0, j_0) = (k_0, l_0),\quad F^{(n)}(i_1, j_1) = (k_1, l_1),\quad \ldots,\quad F^{(n)}(i_{n_s-1}, j_{n_s-1}) = (k_{n_s-1}, l_{n_s-1}) \tag{45}$$


[0191] are determined.


[0192] For the remaining pixels of the source image, the amount of displacement is the weighted average of the displacements of p(ih,jh) (h = 0, . . . , ns−1); i.e., a pixel p(i,j) is mapped to the pixel of the destination image at
$$F^{(m)}(i,j) = (i,j) + \sum_{h=0}^{n_s-1} \left(k_h - i_h,\ l_h - j_h\right)\frac{\mathrm{weight}_h(i,j)}{2^{n-m}} \tag{46}$$


[0193] wherein
$$\mathrm{weight}_h(i,j) = \frac{1/\left\|(i_h - i,\ j_h - j)\right\|^{2}}{\mathrm{totalweight}(i,j)} \tag{47}$$
$$\mathrm{totalweight}(i,j) = \sum_{h=0}^{n_s-1} 1/\left\|(i_h - i,\ j_h - j)\right\|^{2} \tag{48}$$
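
A sketch of equations (46)–(48) with hypothetical names, computing F(m)(i,j) from a list of specified pairs ((ih, jh), (kh, lh)) given at the finest level n; the exact-hit case, where the weight diverges, is resolved here by returning that pixel's own scaled displacement.

```python
def approximate_mapping(i, j, pairs, n, m):
    # Inverse-square-distance weighted average of the specified
    # displacements, scaled down to the m-th level (equations 46-48).
    scale = 2 ** (n - m)
    num_i = num_j = total = 0.0
    for (ih, jh), (kh, lh) in pairs:
        d2 = float((ih - i) ** 2 + (jh - j) ** 2)
        if d2 == 0.0:  # (i, j) is itself a specified pixel
            return (i + (kh - ih) / scale, j + (lh - jh) / scale)
        w = 1.0 / d2                    # unnormalized weight, eq. (47)-(48)
        num_i += (kh - ih) * w
        num_j += (lh - jh) * w
        total += w
    return (i + num_i / total / scale, j + num_j / total / scale)
```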


[0194] Second, the energy D(m,s)(i,j) of the candidate mappings f is changed so that a mapping f similar to F(m) has lower energy. To be precise,
$$D^{(m,s)}_{(i,j)} = E^{(m,s)}_{0(i,j)} + \eta\,E^{(m,s)}_{1(i,j)} + \kappa\,E^{(m,s)}_{2(i,j)} \tag{49}$$


[0195] wherein
$$E^{(m,s)}_{2(i,j)} = \begin{cases} 0, & \text{if } \left\|F^{(m)}(i,j) - f^{(m,s)}(i,j)\right\|^{2} \le \dfrac{\rho^{2}}{2^{2(n-m)}}\\[1ex] \left\|F^{(m)}(i,j) - f^{(m,s)}(i,j)\right\|^{2}, & \text{otherwise} \end{cases} \tag{50}$$


[0196] wherein κ, ρ ≥ 0. Finally, the automatic computing process of mappings described before determines f completely.


[0197] Note that E2(m,s)(i,j) becomes 0 if f(m,s)(i,j) is sufficiently close to F(m)(i,j), i.e., within
$$\frac{\rho^{2}}{2^{2(n-m)}} \tag{51}$$


[0198] This is defined so because it is desirable to determine each value f(m,s)(i,j) automatically, so that it fits in an appropriate place in the destination image as long as it is close to F(m)(i,j). Because of this, it is unnecessary to specify the precise correspondence in detail; the source image is automatically mapped so that it matches the destination image.


[0199] [2] Concrete Processing Procedure


[0200] The flow of the process using the respective elemental techniques described in [1] will be described.


[0201]
FIG. 6 is the flowchart of the entire procedure of this embodiment. Referring to this drawing, processing using a multiresolutional critical point filter is first performed (S1). A source image and a destination image are then matched (S2). S2 is not always necessary for the present invention, and other processing, such as image recognition, may be performed instead, based on the characteristics of the image obtained at S1.


[0202]
FIG. 7 is a flowchart showing the details of the process at S1 in FIG. 6. This process is performed on the assumption that a source image and a destination image are matched at S2. To be specific, a source image is first hierarchized using a critical point filter (S10) to obtain a series of source hierarchical images. Then, a destination image is hierarchized in the same way (S11) to obtain a series of destination hierarchical images. The order of S10 and S11 in the flow is arbitrary, and they may be performed in parallel.


[0203]
FIG. 8 is a flowchart showing the details of the process at S10 in FIG. 7. The size of the original source image is defined as 2^n × 2^n. Since the source hierarchical images are generated sequentially from a fine resolution to a coarse resolution, the parameter m, which indicates the level of resolution to be processed, is set to n (S100). Then, critical points are detected from the images p(m,0), p(m,1), p(m,2), and p(m,3) of the m-th level of resolution, using a critical point filter (S101), so that the images p(m−1,0), p(m−1,1), p(m−1,2), and p(m−1,3) of the (m−1)th level are generated (S102). Since m = n here, p(m,0) = p(m,1) = p(m,2) = p(m,3) = p(n) holds, and the four types of subimages are thus generated from a single source image.


[0204]
FIG. 9 shows correspondence between partial images of the m-th and (m−1)th levels of resolution. The values in the drawing represent the intensities of the respective pixels. p(m,s) stands for any one of the four images p(m,0) to p(m,3), and is regarded as p(m,0) for the generation of p(m−1,0). From the block shown in FIG. 9, comprising four pixels with their intensity values shown inside, the images p(m−1,0), p(m−1,1), p(m−1,2), and p(m−1,3) obtain "3," "8," "6," and "10," respectively, according to the rules described in [1.2]. This block at the m-th level is replaced at the (m−1)th level by the respective single pixels thus obtained. The size of the subimages at the (m−1)th level is therefore 2^(m−1) × 2^(m−1).


[0205] After m is decremented (S103 in FIG. 8), it is confirmed that m is not negative (S104). The process then returns to S101, and the subimages of the next level of resolution, i.e., the level coarser by one, are generated. The above process is repeated until the subimages at m = 0, i.e., at the 0-th level, are generated, completing the process at S10. The size of the subimages at the 0-th level is 1 × 1.


[0206]
FIG. 10 shows the source hierarchical images generated at S10 for n = 3. The initial source image is the only image common to the four series; the four types of subimages are then generated independently, depending on the type of critical point. Note that the process in FIG. 8 is common to S11 in FIG. 7, and that the destination hierarchical images are generated through the same procedure. With the above, the process at S1 in FIG. 6 is completed.


[0207] Upon completion of the process at S1, preparation for matching evaluation is made before proceeding to S2 in FIG. 6. FIG. 11 shows the preparation procedure. To be specific, a plurality of evaluation equations are set (S30), namely the energy C(m,s)f relative to the pixel value, introduced in [1.3.2.1], and the energy D(m,s)f relative to the smoothness of the mapping, introduced in [1.3.2.2]. These are combined into a combined evaluation equation λC(m,s)f + D(m,s)f (S31). Using η introduced in [1.3.2.2], the combined evaluation equation can be written


\[
\sum_{i}\sum_{j}\Bigl(\lambda\,C^{(m,s)}_{(i,j)}+\eta\,E^{(m,s)}_{0\,(i,j)}+E^{(m,s)}_{1\,(i,j)}\Bigr)\tag{52}
\]


[0208] The sums are computed for i and j each running over 0, 1, . . . , 2m−1. With the above, preparation for matching evaluation is completed.
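
As a small illustration of equation (52), the double sum can be evaluated directly once the per-pixel energies are available. The array names below are assumptions, standing for C(m,s)(i,j), E0(m,s)(i,j), and E1(m,s)(i,j) over i, j = 0, 1, . . . , 2m−1.

```python
import numpy as np

def combined_energy(C, E0, E1, lam, eta):
    # Equation (52): the double sum over i and j of
    # lambda*C + eta*E0 + E1, each array being 2^m x 2^m.
    return np.sum(lam * C + eta * E0 + E1)
```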


[0209]
FIG. 12 is a flowchart showing the details of the process at S2 in FIG. 6. As described in [1], source hierarchical images and destination hierarchical images are matched between images at the same level of resolution. For detecting global correspondence correctly, a matching is calculated from a coarse level to a fine level of resolution. Since a critical point filter is used in generating the source and destination hierarchical images, the location and intensity of critical points are clearly preserved even at a coarse level. This makes it possible to capture global correspondence more accurately than with a conventional method.


[0210] Referring to FIG. 12, the coefficient parameter η and the level parameter m are set to 0 (S20). A matching is computed between the respective four subimages at the m-th level of the source hierarchical images and of the destination hierarchical images, so that four types of submappings f(m,s) (s=0, 1, 2, 3) which satisfy the BC and minimize the energy are obtained (S21). The BC is checked using the inherited quadrilateral described in [1.3.3]. The submappings at the m-th level are constrained by those at the (m−1)th level, as indicated by equations 17 and 18; the matching calculated at a coarser level of resolution is thus used sequentially in subsequent calculations of a matching. This may be called vertical reference between different levels. When m=0, there is no coarser level, and the process there differs, as will be described later referring to FIG. 13.


[0211] Horizontal reference within the same level is also performed. As indicated by equation 20 in [1.3.3], f(m,3), f(m,2), and f(m,1) are defined so as to be analogous to f(m,2), f(m,1), and f(m,0), respectively, because it would be unnatural for submappings relative to different types of critical points to be completely different from one another so long as they originate from the same source and destination images. As equation 20 shows, the closer the submappings are to each other, the smaller the energy becomes, and the more preferable the matching is considered.


[0212] As for f(m,0), which is to be determined first, a level coarser by one is referred to, since there is no other submapping at the same level available for reference. In the implementation, however, f(m,0) is finalized after the submappings have been obtained up to f(m,3), by updating the initially determined f(m,0) using the submapping f(m,3) as a constraint. This is equivalent to substituting s=4 into equation 20 and making f(m,4) the final f(m,0). This scheme is employed to avoid the tendency of the relevance between f(m,0) and f(m,3) becoming too low, and it has actually produced preferable results. In addition, the submappings are shuffled in the implementation as described in [1.7.1], so as to maintain close relevance among submappings which are otherwise determined independently for every type of critical point. Further, to prevent the processing from being biased by repeated use of the same starting point, the location of the starting point is changed according to the value of s, as described in [1.7].


[0213]
FIG. 13 shows how the submappings are determined at the 0-th level. Since each f(m,s) there consists of a single pixel, an identity mapping is automatically selected for each of the four submappings. FIG. 14 shows how the submappings are determined at the first level, where each submapping is constituted of four pixels, indicated by solid lines in the drawing. A corresponding point (pixel) of the point (pixel) x in p(1,s) is searched for within q(1,s) according to the following procedure.


[0214] 1. An upper left point a, an upper right point b, a lower left point c, and a lower right point d of the point x are searched at the first level of resolution.


[0215] 2. The pixels to which the points a to d belong at a level coarser by one, i.e., the 0-th level, are searched. In FIG. 14, the points a to d belong to the pixels A to D, respectively. The pixels A to C are hypothetical pixels which exist only virtually.


[0216] 3. The corresponding points A′ to D′ of the pixels A to D, which have already been defined at the 0-th level, are plotted in q(1,s). The pixels A′ to C′ are also hypothetical pixels and are assumed to be located at the same positions as the pixels A to C.


[0217] 4. Presuming that the corresponding point a′ of the point a is located inside the pixel A′, the point a′ is plotted under the assumption that it occupies the same location within the pixel A′ as the point a occupies within the pixel A (the lower right here).


[0218] 5. The corresponding points b′ to d′ are plotted in the same manner, so that the points a′ to d′ constitute an inherited quadrilateral.


[0219] 6. The corresponding point x′ of the point x is searched for such that the energy is minimized within the inherited quadrilateral. Candidate corresponding points x′ may be limited to the pixels, for instance, whose centers are included in the inherited quadrilateral. In FIG. 14, all four pixels become candidates.


[0220] The above is the procedure for determining the corresponding point of a given point x. The same processing is performed on all other points to determine the submappings. Since a deformed inherited quadrilateral is expected at the second and higher levels, the pixels A′ to D′ are located apart from one another, as shown in FIG. 3.
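
The search of steps 3 to 6 can be sketched as follows, under stated assumptions: the inherited quadrilateral is taken to be convex, `energy` is a caller-supplied stand-in for the combined evaluation of [1.3.2], and the helper names are illustrative.

```python
def inside_quadrilateral(pt, quad):
    # Cross-product test: True if `pt` lies inside the quadrilateral
    # `quad`, given as a list of the four corners a', b', d', c' in
    # winding order (assumes the inherited quadrilateral stays convex).
    px, py = pt
    signs = [(x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
             for (x1, y1), (x2, y2) in zip(quad, quad[1:] + quad[:1])]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def corresponding_point(x, quad, pixel_centers, energy):
    # Step 6: among the pixels of q(m,s) whose centers fall inside the
    # inherited quadrilateral, pick the x' that minimizes the energy.
    candidates = [c for c in pixel_centers if inside_quadrilateral(c, quad)]
    return min(candidates, key=lambda x_prime: energy(x, x_prime))
```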


[0221] Once the four submappings at the m-th level are determined as described above, m is incremented (S22 in FIG. 12). It is then ensured that m does not exceed n (S23) before the process returns to S21. Every time the process returns to S21, submappings at a finer level of resolution are obtained, until the mapping f(n) at the n-th level is finally determined. This mapping is denoted f(n)(η=0), as it has been determined relative to η=0.


[0222] Next, to obtain the mapping for the next η, η is shifted by Δη and m is reset to zero (S24). After ensuring that the new η does not exceed a predetermined threshold ηmax (S25), the process returns to S21, where the mapping f(n) (η=Δη) for the new η is obtained. This process is repeated to obtain f(n) (η=iΔη) (i=0, 1, . . . ) at S21. When η exceeds ηmax, the process proceeds to S26, where the optimal η=ηopt is determined using a method described later, and f(n) (η=ηopt) becomes the final mapping f(n).
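
The control flow of FIG. 12 (S20 to S26) may be drafted as the following skeleton. `submappings_at_level`, `energy_of`, and `pick_first_minimum` are hypothetical stand-ins: the first for the per-level process of FIG. 15, the last for the ηopt selection at S26 (a sketch of the first-minimum rule appears after [0227] below).

```python
def match_hierarchies(src_levels, dst_levels, d_eta, eta_max, n):
    # S20: eta = 0, m = 0; S21-S23: coarse-to-fine sweep over m;
    # S24-S25: step eta by d_eta until it exceeds eta_max.
    trials, i = [], 0                          # (eta, C(n)f, f(n)) triples
    while i * d_eta <= eta_max:
        eta, f = i * d_eta, None
        for m in range(n + 1):                 # m = 0 (coarsest) .. n (finest)
            f = submappings_at_level(src_levels, dst_levels, m, eta,
                                     coarser=f)  # vertical reference
        trials.append((eta, energy_of(f), f))  # f(n)(eta = i * d_eta)
        i += 1
    eta_opt, _, f_final = pick_first_minimum(trials)   # S26
    return eta_opt, f_final
```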


[0223]
FIG. 15 is a flowchart showing the details of the process at S21 in FIG. 12. According to this flowchart, the submappings at the m-th level are determined for a certain η. In determination of the mappings, the optimal λ is defined independently for every submapping in this embodiment.


[0224] Referring to FIG. 15, s and λ are reset to zero (S210). Then, the submapping f(m,s) that minimizes the energy is obtained for the current λ (and, implicitly, for the current η) (S211), and denoted f(m,s) (λ=0). To obtain the mapping for the next λ, λ is shifted by Δλ (S212), and it is ensured that it does not exceed a predetermined threshold λmax (S213). The process then returns to S211, where f(m,s) (λ=iΔλ) (i=0, 1, . . . ) is obtained. When λ exceeds λmax, the process proceeds to S214, where the optimal λ=λopt is determined and f(m,s) (λ=λopt) becomes the final submapping f(m,s) (S214).


[0225] Subsequently, λ is reset to zero and s is incremented to obtain the next submapping at the same level (S215). After ensuring that s does not exceed 4 (S216), the process returns to S211. When s=4, f(m,0) is updated reflecting f(m,3) as described above to complete determination of submappings at that level.
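
The per-submapping λ sweep of FIG. 15 can be drafted as follows. `minimize_combined_energy` is a hypothetical solver for S211, assumed to return the mapping f(m,s) for the given λ together with its energy C(m,s)f; the selection of λopt at S214 uses the first-minimum rule of FIG. 16, sketched after [0227].

```python
def sweep_lambda(m, s, eta, d_lambda, lambda_max, minimize_combined_energy):
    # S210-S213: starting from lambda = 0, obtain f(m,s)(lambda = i*d_lambda)
    # for each step until lambda exceeds lambda_max.
    trials, i = [], 0
    while i * d_lambda <= lambda_max:
        lam = i * d_lambda
        f, C_f = minimize_combined_energy(m, s, eta, lam)   # S211
        trials.append((lam, C_f, f))
        i += 1
    lambda_opt, _, f_opt = pick_first_minimum(trials)       # S214, FIG. 16
    return lambda_opt, f_opt
```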


[0226]
FIG. 16 shows the behavior of the energy C(m,s)f corresponding to f(m,s) (λ=iΔλ), obtained for certain m and s while λ is varied. As described in [1.4], as λ increases, C(m,s)f normally decreases but turns to increase once λ exceeds the optimal value. In this embodiment, the λ giving the minimum C(m,s)f is determined as λopt. Even if C(m,s)f becomes smaller again in the range λ>λopt, the mapping has already been spoiled by then; for this reason, attention should be paid to the first minimum. λopt is determined independently for every submapping, including for f(n).


[0227]
FIG. 17 shows the behavior of the energy C(n)f corresponding to f(n)(η=iΔη), obtained while η is varied. Here again, since C(n)f normally decreases as η increases and turns to increase once η exceeds the optimal value, the η giving the minimum C(n)f is determined as ηopt. FIG. 17 can be regarded as an enlargement, around zero along the horizontal axis, of FIG. 4. Once ηopt is decided, f(n) can be finally determined.
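
Both selections — λopt in FIG. 16 and ηopt in FIG. 17 — follow the same first-minimum rule, which may be sketched as below; the function name and tuple layout are assumptions carried over from the earlier sketches.

```python
def pick_first_minimum(trials):
    # trials: (parameter, energy, mapping) triples in increasing
    # parameter order. The energy normally decreases, then turns to
    # increase past the optimum; later dips are ignored because the
    # mapping is already spoiled there.
    for prev, cur in zip(trials, trials[1:]):
        if cur[1] > prev[1]:          # energy turned to increase
            return prev               # first minimum found
    return trials[-1]                 # monotone decrease within the range
```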


[0228] As described above, this embodiment provides various merits. First, since edge detection is unnecessary, the problems associated with conventional edge-detection techniques are avoided. Further, no prior knowledge about the objects included in an image is required, and automatic detection of corresponding points is achieved. Using a critical point filter, the intensity and locations of critical points can be preserved even at a coarse level of resolution, which has strong applications in object recognition, characteristic extraction, and image matching. As a result, an image processing system which significantly reduces manual labor can be constructed.


[0229] Some extensions or modifications of the above embodiment may be made as follows:


[0230] i) Parameters are automatically determined while matching source and destination hierarchical images in the above embodiment. This method can be applied not only to matching hierarchical images but also to matching two images.


[0231] For instance, the energy E0 relative to a difference in pixel intensity and the energy E1 relative to a positional displacement of pixels between two images may be used as evaluation equations, and a linear sum of these, i.e., Etot=αE0+E1, may be used as the combined evaluation equation. The parameter α is automatically determined while noting the neighborhood of an extreme of this combined evaluation equation: mappings which minimize Etot are obtained for various α, and the α whose mapping gives the minimum E1 is determined as the optimal parameter αopt. The mapping corresponding to αopt among the mappings obtained above is finally adopted as the optimal mapping between the images.
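
A sketch of this procedure, assuming a hypothetical `solve(alpha)` that returns the mapping minimizing Etot for that α together with the E0 and E1 values of that mapping:

```python
def determine_alpha(alphas, solve):
    # For each candidate alpha, obtain the mapping minimizing
    # Etot = alpha*E0 + E1, then keep the alpha whose mapping
    # attains the smallest E1.
    trials = [(solve(a), a) for a in alphas]       # ((f, E0, E1), alpha)
    (f_opt, _, _), alpha_opt = min(trials, key=lambda t: t[0][2])
    return alpha_opt, f_opt
```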


[0232] Many other methods are available for defining evaluation equations. For instance, an equation which becomes larger for a better estimation, such as 1/E1 or 1/E2, may be employed. Nor does the combined evaluation equation have to be a linear sum: an n-th power sum (n=2, ½, −1, −2, etc.), a polynomial, or an arbitrary function may be employed as desired.


[0233] The system may employ a single parameter such as the above α, two parameters such as η and λ, or more. When multiple parameters are used, they are determined by changing them one at a time.


[0234] ii) In the described embodiment, a parameter is determined by a two-step process: a mapping is first determined such that the value of the combined evaluation equation is minimized, and the point at which one of its constituent evaluation equations, namely C(m,s)f, takes its minimum is then detected.


[0235] However, instead of this two-step processing, a parameter may in some cases be determined effectively simply such that the minimum of the combined evaluation equation becomes smallest. In such a case, αE0+βE1, for instance, may be employed as the combined evaluation equation, with α+β=1 imposed as a constraint so that the respective evaluation equations are treated equally. The essence of automatic parameter determination lies in choosing the parameter so that the energy is minimized.
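
A sketch of this one-step alternative, with a hypothetical `min_energy(alpha, beta)` returning the minimum of αE0+βE1 over mappings together with the minimizing mapping:

```python
def determine_alpha_beta(alphas, min_energy):
    # beta = 1 - alpha enforces the constraint alpha + beta = 1;
    # the parameter whose combined-equation minimum is smallest wins.
    best = None
    for alpha in alphas:
        e_min, f = min_energy(alpha, 1.0 - alpha)
        if best is None or e_min < best[0]:
            best = (e_min, alpha, f)
    return best[1], best[2]               # alpha_opt and its mapping
```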


[0236] iii) In the above embodiment, four types of submappings are generated for the four types of critical points at each level of resolution. However, one, two, or three of the four types may be used selectively. For instance, if only one bright point exists in an image, generating hierarchical images based solely on f(m,3), which relates to a maximum, can produce a sufficient effect. In this case, no other submapping is necessary at the same level, and the amount of computation relative to s is effectively reduced.


[0237] iv) In the above embodiment, as the level of resolution of an image advances by one through the critical point filter, the number of pixels becomes ¼. However, if one block is assumed to consist of 3×3 pixels and critical points are searched for within that block, the number of pixels is reduced to 1/9 as the level advances by one.
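
As an illustration of the 3×3 variant, reduction by block maxima (the counterpart of f(m,3)) can be sketched as follows; the handling of minima and saddles for 3×3 blocks is not specified in the text, so only the maximum case is shown, and the function name is illustrative.

```python
import numpy as np

def filter_3x3_maximum(p):
    # Each 3x3 block contributes a single pixel (its maximum), so the
    # pixel count drops to 1/9 per level; the side of `p` is assumed
    # to be a multiple of 3.
    h, w = p.shape
    return p.reshape(h // 3, 3, w // 3, 3).max(axis=(1, 3))
```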


[0238] v) When the source and destination images are color images, they are first converted to monochrome images and the mappings are computed; the original color images are then transformed using the resulting mappings. Alternatively, a submapping may be computed for each of the RGB components.
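
A sketch of the monochrome route, assuming a standard luminance weighting (the specification does not name one) and a hypothetical `match` routine implementing the matching described above:

```python
import numpy as np

def match_color_images(src_rgb, dst_rgb, match):
    # Convert both H x W x 3 color images to monochrome, then compute
    # the mapping on the monochrome pair; the mapping is subsequently
    # used to transform the original color images.
    weights = np.array([0.299, 0.587, 0.114])   # assumed luminance weights
    return match(src_rgb @ weights, dst_rgb @ weights)
```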


[0239] [4] Experimental Results


[0240] Using the described embodiment, various images can be interpolated. When two images taken from different viewpoints are interpolated, images from intermediate viewpoints can be generated. This has strong applications for World Wide Web (WWW) based information servers, because arbitrary views can be generated from a limited number of stored images. When images of two persons' faces are interpolated, morphing can be performed. When the images are cross sections of 3D objects, such as CT and MRI data, the interpolation enables accurate 3D object shapes to be reconstructed for volume rendering.


[0241] FIGS. 18(a), 18(b), and 18(c) relate to a case where the mapping is used for generating intermediate viewpoint images; a left-eye image and a right-eye image are interpolated. FIG. 18(a) shows the source image viewed with the left eye; FIG. 18(b) shows the destination image viewed with the right eye; and FIG. 18(c) shows the resulting intermediate image, wherein the value of the parameter t described in [1.8] is 0.5 for simplicity.


[0242] FIGS. 19(a) to 19(d) relate to a case where the mapping is used for morphing of human faces; two facial images of different persons are interpolated. FIG. 19(a) shows the source image; FIG. 19(b) shows the destination image; FIG. 19(c) shows the source image superimposed on the destination image; and FIG. 19(d) shows the resulting intermediate image, wherein t=0.5.


[0243] FIGS. 20(a) and 20(b) relate to a case where the mapping is used for interpolation between a human face and the face of a cat. FIG. 20(a) shows the face of a cat; FIG. 20(b) shows a morphing image of the human face and the cat face, with FIG. 19(a) used for the human face. The normalization of pixel intensity described in [1.6] is used only in this example.


[0244] FIGS. 21(a), 21(b), and 21(c) relate to a case where the system is applied to images including a number of objects. FIG. 21(a) shows the source image; FIG. 21(b) shows the destination image; and FIG. 21(c) is the resulting intermediate image wherein t=0.5.


[0245] FIGS. 22(a), 22(b), 22(c), and 22(d) show the result of using the mapping to interpolate images of a human brain whose cross sections were obtained by MRI. FIG. 22(a) shows the source image; FIG. 22(b) shows the destination image (the upper cross section); FIG. 22(c) shows the intermediate image, wherein t=0.5; and FIG. 22(d) shows an oblique view of the result of volume rendering with four cross sections. The object is rendered completely opaque, and only the interpolated pixels whose intensity exceeds 51 (=255×0.2) are displayed. The reconstructed object is then cut vertically near the center to show the interior of the volume.


[0246] In these examples, the MRI images are of 512×512 pixels, and the other images are of 256×256 pixels. The intensity of a pixel ranges from 0 to 255. The SJ condition described in [1.3.1] is used in all the application examples except for FIGS. 21(a) to 21(c). In all the application examples, B0thres=0.003 and B1thres=0.5 are used, and it has not been necessary to modify these values for any image. The pixel intensity of each subimage has been normalized in FIGS. 20(a) and 20(b) only.


[0247] The writing of this specification necessitated choosing or creating a number of terms; it should be noted that these choices were made in order to describe the present invention, and not to limit it from the full scope and spirit as claimed.


Claims
  • 1. A multiresolutional filtering method comprising: a detection step of detecting a critical point through a two dimensional search carried out on a first image; and a generation step of generating a second image having a lower resolution than that of the first image through extraction of the critical point detected.
  • 2. A method as defined in claim 1, wherein a critical point is searched for inside each of a plurality of blocks constituting the first image.
  • 3. A method as defined in claim 2, wherein a critical point is detected by searching for a point having either a maximum or minimum pixel value in two directions of each of the blocks.
  • 4. A method as defined in claim 3, wherein a pixel having a maximum pixel value in the two directions is detected as a maximum.
  • 5. A method as defined in claim 3, wherein a pixel having a minimum pixel value in the two directions is detected as a minimum.
  • 6. A method as defined in claim 3, wherein a pixel having a maximum pixel value in one of the two directions and a minimum pixel value in the other direction is detected as a saddle point.
  • 7. A method as defined in claim 3, wherein each of the blocks includes four pixels consisting of two pixels in a horizontal direction and two pixels in a vertical direction; and each of the four pixels is classified into either a maximum, a minimum, or one of two types of saddle points.
  • 8. A method as defined in claim 2, wherein an image of a critical point detected inside a block is made to represent an image of the block to thereby reduce resolution of the image.
  • 9. A method as defined in claim 2, wherein the second image is generated for each type of a critical point detected inside each of the blocks.
  • 10. An image matching method comprising: a first step of generating source hierarchical images each having a different resolution through multiresolutional critical point filtering carried out to a source image; a second step of generating destination hierarchical images each having a different resolution through multiresolutional critical point filtering carried out to a destination image; and a third step of matching the source hierarchical images and the destination hierarchical images.
  • 11. A method as defined in claim 10, wherein a mapping between an image of a certain level of resolution among the source hierarchical images and an image of the same level of resolution among the destination hierarchical images is determined in consideration of a mapping at another predetermined level of resolution.
  • 12. A method as defined in claim 11, wherein the mapping is determined using the mapping at the predetermined level of resolution as a constraint.
  • 13. A method as defined in claim 11, wherein the predetermined level of resolution is a coarser level than that at which the mapping is currently determined.
  • 14. A method as defined in claim 13, wherein the predetermined level of resolution is one level coarser than that at which the mapping is currently determined.
  • 15. A method as defined in claim 11, wherein a mapping is first determined at a coarsest level of resolution, and then sequentially at finer levels of resolution.
  • 16. A method as defined in claim 11, wherein the mapping is determined so as to satisfy Bijectivity conditions.
  • 17. A method as defined in claim 16, wherein a relaxation is provided to the Bijectivity conditions.
  • 18. A method as defined in claim 17, wherein the relaxation is to allow a mapping to be a retraction.
  • 19. A method as defined in claim 11, wherein the source hierarchical images and the destination hierarchical images are generated for each type of a critical point, and the mapping is computed for each type of a critical point.
  • 20. A method as defined in claim 19, wherein a mapping is computed for a certain type of a critical point in consideration of a mapping which has already been obtained for another type of a critical point at the same level of resolution.
  • 21. A method as defined in claim 20, wherein the mapping is computed under a condition that the mapping should be similar to the mapping which has already been obtained.
  • 22. A method as defined in claim 10, wherein a plurality of evaluation equations are defined according to a plurality of matching evaluation items; the plurality of evaluation equations are combined so as to define a combined evaluation equation; and an optimal matching is searched while noting the neighborhood of an extreme of the combined evaluation equation.
  • 23. A method as defined in claim 22, wherein the combined evaluation equation is defined as a sum of the plurality of evaluation equations, at least one of which has been multiplied by a coefficient parameter.
  • 24. A method as defined in claim 23, wherein each of the plurality of evaluation equations takes a smaller value for better evaluation, and the coefficient parameter is automatically determined so that a minimum of the combined evaluation equation becomes its smallest value.
  • 25. A method as defined in claim 23, wherein each of the plurality of evaluation equations takes a larger value for better evaluation, and the coefficient parameter is automatically determined so that a maximum of the combined evaluation equation becomes its largest value.
  • 26. A method as defined in claim 23, wherein the coefficient parameter is automatically determined by detecting the neighborhood of an extreme of one of the plurality of evaluation equations.
  • 27. A method as defined in claim 22, wherein the combined evaluation equation is defined as a linear sum of a first evaluation equation for a pixel value and a second evaluation equation for a pixel location; a value of the first evaluation equation is recorded when the combined evaluation equation takes a value which is in the neighborhood of an extreme while varying a coefficient parameter of at least the first evaluation equation; and the coefficient parameter is fixed when the first evaluation equation takes a value which is in the neighborhood of an extreme and is used in subsequent evaluations.
  • 28. An image matching method wherein, for matching a source image and a destination image, an evaluation equation is set for each of a plurality of matching evaluation items; the plurality of evaluation equations are combined so as to define a combined evaluation equation; and an optimal matching is searched while noting the neighborhood of an extreme of the combined evaluation equation.
  • 29. A method as defined in claim 28, wherein the combined evaluation equation is defined as a sum of the plurality of evaluation equations, at least one of which has been multiplied by a coefficient parameter.
  • 30. A method as defined in claim 29, wherein each of the plurality of evaluation equations takes a smaller value for better evaluation, and the coefficient parameter is automatically determined so that a minimum of the combined evaluation equation becomes its smallest value.
  • 31. A method as defined in claim 29, wherein each of the plurality of evaluation equations takes a larger value for better evaluation, and the coefficient parameter is automatically determined so that a maximum of the combined evaluation equation becomes its largest value.
  • 32. A method as defined in claim 29, wherein the coefficient parameter is automatically determined by detecting the neighborhood of an extreme of one of the plurality of evaluation equations.
  • 33. A method as defined in claim 28, wherein the combined evaluation equation is defined as a linear sum of a first evaluation equation for a pixel value and a second evaluation equation for a pixel location; a value of the first evaluation equation is recorded when the combined evaluation equation takes a value which is in the neighborhood of an extreme while varying a coefficient parameter of at least the first evaluation equation; and the coefficient parameter is fixed when the first evaluation equation takes a value which is in the neighborhood of an extreme and is used in subsequent evaluations.
  • 34. A multiresolutional filtering method, wherein a critical point is detected in a first image by performing a two dimensional search, and a second image having a lower resolution than that of the first image is generated with the critical point detected.
  • 35. An image matching method, wherein source hierarchical images each having a different resolution are generated through multiresolutional critical point filtering carried out to a source image; destination hierarchical images each having a different resolution are generated through multiresolutional critical point filtering carried out to a destination image; and the source hierarchical images and the destination hierarchical images are matched.
Priority Claims (1)
Number Date Country Kind
9-095318 Mar 1997 JP
Divisions (1)
Number Date Country
Parent 08848099 Apr 1997 US
Child 09433368 Nov 1999 US
Continuations (2)
Number Date Country
Parent 09693981 Oct 2000 US
Child 10006237 Dec 2001 US
Parent 09433368 Nov 1999 US
Child 09693981 Oct 2000 US