Since the advent of digital cameras and video camcorders, multimedia content creation has become a much easier task for both professional and amateur photographers. As the sizes of personal media collections continue to grow, the problem of media organization, management, and utilization has become a much more pressing issue. Recently, many intelligent multimedia management tools, such as content-based image/video retrieval and semantic tagging, have been built by the research community to attack this problem. One core problem underlying these content-analysis and management tools is the issue of image matching; that is, given two images, how to quantify their "similarity" such that it truly reflects users' perceptual similarity in the problem domain. This image matching problem has been heavily researched for decades. Recently there has been increasing interest in using these basic matching techniques to detect near duplicates among image/video collections, mainly due to the wide range of potential applications, such as personal image clustering and video threading.
What are needed are improved apparatus and methods of detecting matching images with high efficiency and effectiveness.
In one aspect, the invention features a method in accordance with which a first set of local features is extracted from a first image and a second set of the local features is extracted from a second image. One or more candidate matches of the local features in the first set and the second set are determined. For each of the candidate matches, the following operations are performed. A first group of a specified number of nearest neighbors of the local feature of the candidate match in the first image is selected. A second group of the specified number of nearest neighbors of the local feature of the candidate match in the second image is chosen. Matches between the neighboring local features in the first group and corresponding neighboring local features in the second group are ascertained. The candidate match is designated as either a true match or a non-match based on the ascertained matches between nearest neighbor local features. The first and second images are classified as either near duplicate images or non-near duplicate images based on the true matches.
In another aspect, the invention features a method in accordance with which features in a current feature set are extracted from a first image and a second image. The current feature set is in a sequence of successive feature sets that consist of respective sets of constituent features and are arranged in order of increasing computational cost associated with extraction of their respective constituent features. The first image and the second image are classified as either a near duplicate image pair or a candidate non-near duplicate image pair based on the extracted features. In response to each classification of the first image and the second image as candidate non-near duplicate images based on the extracted values of the current feature set, the extraction and classification are repeated with the next successive one of the feature sets following the current feature set in the sequence as the current feature set.
In another aspect, the invention features a method in accordance with which a sequence of successive feature sets is determined. The feature sets consist of respective sets of constituent features and are arranged in order of increasing computational cost associated with extraction of values of their respective constituent features. A cascade of successive classification stages is built. In this process, each of the classification stages is trained on a respective one of the feature sets such that the classification stage is operable to classify images as either near duplicate images or candidate non-near duplicate images based on the features of the respective feature set that are extracted from the images. The classification stages are arranged successively in the order of the successive feature sets in the sequence.
The invention also features apparatus operable to implement the methods described above and computer-readable media storing computer-readable instructions causing a computer to implement the methods described above.
In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
I. DEFINITION OF TERMS
The term “near duplicate” refers to an image that contains substantially the same content as another image. Two images containing substantially the same content are considered near duplicates even if they have different layouts or formats. The process of detecting near duplicates of a given image also will detect exact duplicates of the given image.
A “computer” is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently. A “computer operating system” is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources. A “software application” (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks. A “data file” is a block of information that durably stores data for use by a software application.
II. DETECTING NEAR DUPLICATE IMAGES
The embodiments that are described herein provide systems and methods that are capable of detecting near duplicate images with high efficiency and effectiveness.
For each of the candidate matches, the feature processor 12 performs the following operations: it selects a group of nearest neighbors of the candidate match feature in each of the two images, ascertains matches between the neighboring local features in the two groups, and designates the candidate match as either a true match or a non-match based on the ascertained matches.
The classifier 14 classifies the first and second images as either near duplicate images or non-near duplicate images based on the true matches.
In general, any of a wide variety of different local descriptors may be used to extract the local feature values.
In some embodiments, the feature processor 12 applies an ordinal spatial intensity distribution (OSID) descriptor to the first and second images 16, 18 to produce respective ones of the local feature values 24. The OSID descriptor is obtained by computing a 2-D histogram in the intensity ordering and spatial sub-division spaces, as described in F. Tang, S. Lim, N. Chang and H. Tao, "A Novel Feature Descriptor Invariant to Complex Brightness Changes," CVPR 2009 (June 2009). Because the descriptor is constructed in the ordinal space rather than the raw intensity space, the resulting local features are invariant to any monotonically increasing brightness change and perform well even in the presence of image blur, viewpoint changes, and JPEG compression. In some embodiments, the feature processor 12 first detects local feature regions in the first and second images 16, 18 using, for example, a Hessian-affine region detector, which outputs a set of affine-normalized image patches. An example of a Hessian-affine region detector is described in K. Mikolajczyk et al., "A Comparison of Affine Region Detectors," International Journal of Computer Vision (IJCV) (2005). The feature processor 12 applies the OSID descriptor to the detected local feature regions to extract the OSID feature values from the first and second images 16, 18. This approach makes the resulting local feature values robust to viewpoint changes.
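A minimal sketch may help make the descriptor construction concrete. The function below builds a 2-D histogram over intensity-ordering bins and spatial sub-divisions of an affine-normalized grayscale patch; the use of angular sectors as the spatial sub-division, the bin counts, and the function name osid_descriptor are illustrative assumptions rather than the exact descriptor defined in the cited paper:

```python
import numpy as np

def osid_descriptor(patch, n_ordinal_bins=8, n_spatial_bins=16):
    """OSID-style descriptor sketch: a 2-D histogram over intensity-ordering
    bins and spatial sub-divisions of an affine-normalized grayscale patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Spatial bins: angular sectors around the patch center (an assumption).
    theta = np.arctan2(ys - (h - 1) / 2.0, xs - (w - 1) / 2.0)
    spatial_bin = ((theta + np.pi) / (2 * np.pi) * n_spatial_bins).astype(int)
    spatial_bin = np.clip(spatial_bin, 0, n_spatial_bins - 1)
    # Ordinal bins: rank pixels by intensity, then split the ranking into
    # equally populated groups (invariant to monotonic brightness changes).
    ranks = np.argsort(np.argsort(patch.ravel()))
    ordinal_bin = (ranks * n_ordinal_bins // patch.size).reshape(h, w)
    # Accumulate the 2-D histogram and L2-normalize it.
    hist = np.zeros((n_ordinal_bins, n_spatial_bins))
    np.add.at(hist, (ordinal_bin, spatial_bin), 1)
    return (hist / (np.linalg.norm(hist) + 1e-12)).ravel()
```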
In some embodiments, the feature processor 12 determines an initial set of candidate matches between the local features extracted from the first and second images 16, 18.
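The details of this step are implementation specific. One common way to form an initial candidate set, shown purely as an illustrative sketch, is to pair each descriptor in the first image with its nearest descriptor in the second image and keep the pair only if it passes a distance-ratio test; the function name, the ratio threshold, and the use of a ratio test are assumptions, not taken from the text:

```python
import numpy as np

def candidate_matches(desc_s, desc_d, ratio=0.8):
    """Illustrative candidate matching by nearest-descriptor search with a
    distance-ratio test; desc_s is (Ns, dim), desc_d is (Nd, dim)."""
    matches = []
    if len(desc_d) < 2:
        return matches
    for i, d in enumerate(desc_s):
        dists = np.linalg.norm(desc_d - d, axis=1)
        j, j2 = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[j2]:   # best match is distinctly better
            matches.append((i, int(j)))
    return matches
```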
The feature processor 12 prunes the initial set of candidate matches based on the degree to which the local structures (represented by the nearest neighbor local features) in the neighborhoods of the candidate match features agree between the first and second images 16, 18.
Let S and D denote the sets of local features extracted from the first and second images 16, 18, respectively. The local structure (neighborhood) of a feature f_i^S in S is denoted LS_i^S = {f_i1^S, f_i2^S, . . . , f_iK^S}, which contains the K nearest local features in S to the feature f_i^S. Similarly, the local structure of f_j^D in D is denoted LS_j^D = {f_j1^D, f_j2^D, . . . , f_jK^D}. The feature processor 12 prunes the set of candidate matches by comparing the local structures LS_i^S and LS_j^D. If there is a sufficient match between the local structures of a given candidate match in the first and second images 16, 18, then the candidate match is designated as a true match; otherwise the candidate match is designated as a non-match and is pruned from the set.
In some embodiments, for each of the candidate matches, the feature processor 12 tallies the ascertained matches between nearest neighbor local features to obtain a count of the ascertained matches, and designates the candidate match as either a true match or a non-match by applying a threshold to that count. In some of these embodiments, the feature processor 12 determines how many feature pairs, with one feature from LS_i^S and the other from LS_j^D, belong to the initial set of candidate matches M; this matched set is denoted LSM_i,j = {{f_m^S, f_n^D} ∈ M : f_m^S ∈ LS_i^S, f_n^D ∈ LS_j^D}. The confidence of the match {f_i^S, f_j^D} is defined as card(LSM_i,j)/K, where card(·) is the cardinality of the set. If the confidence is below a threshold level, the feature processor 12 regards the candidate match as a mismatch and prunes it from the set. The final set of true matches for an image pair Ip and Iq is denoted FM(Ip, Iq) = {{f_i^Ip, f_j^Iq}}, where 1 ≤ i ≤ Ns and 1 ≤ j ≤ Nd.
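The local-structure pruning step described above might be sketched as follows, assuming the keypoint locations are available as NumPy arrays and an initial candidate set has already been formed; the values of K and the confidence threshold are illustrative:

```python
import numpy as np

def prune_by_local_structure(pts_s, pts_d, candidates, K=5, conf_thresh=0.4):
    """Keep only candidate matches whose K-nearest-neighbor local structures
    agree in both images.

    pts_s, pts_d : (Ns, 2) and (Nd, 2) arrays of keypoint locations.
    candidates   : list of (i, j) index pairs into pts_s / pts_d.
    """
    cand_set = set(candidates)

    def knn(pts, idx, k):
        # Indices of the k nearest neighbors of point idx (excluding itself).
        d = np.linalg.norm(pts - pts[idx], axis=1)
        return [int(n) for n in np.argsort(d) if n != idx][:k]

    true_matches = []
    for i, j in candidates:
        ls_s = knn(pts_s, i, K)   # local structure LS_i^S
        ls_d = knn(pts_d, j, K)   # local structure LS_j^D
        # Count neighbor pairs that are themselves candidate matches.
        support = sum(1 for m in ls_s for n in ls_d if (m, n) in cand_set)
        confidence = support / float(K)
        if confidence >= conf_thresh:
            true_matches.append((i, j))
    return true_matches
```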
After the feature processor 12 identifies the final set of true matches, the classifier 14 classifies the first and second images 16, 18 based on a matching score that is derived from the true matches.
In other embodiments, the feature processor 12 determines a weighted sum of the true matches, where the sum is weighted based on the locations of the local features of the true matches in the first and second images. In some of these embodiments, the feature processor 12 takes the user's attention into account by giving more weight to those true-match features that fall within a specified attention region. In general, the attention region may be defined in a variety of different ways. In some embodiments, the attention region is defined as a central region of an image. A weighting mask is defined with respect to the attention region, where the weights assigned to locations inside the attention region are higher than the weights assigned to locations outside the attention region. In some embodiments, the weighting mask is a Gaussian weighting mask W(x, y) that gives more weight to true-match local features that are close to the image center and less weight to true-match local features near the image boundary.
MS(Ip, Iq) = Σi W(x_Gi, y_Gi)   (1)
where Gi = f_i^Ip and {f_i^Ip} ∈ FM(Ip, Iq). In some embodiments, the feature processor 12 makes the matching score symmetric by computing the following symmetric matching score (SMS):
SMS(Ip, Iq) = (MS(Ip, Iq) + MS(Iq, Ip))/2   (2)
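Equations (1) and (2) might be implemented along the following lines; the Gaussian mask width (sigma_frac) and the function names are illustrative choices, not taken from the text:

```python
import numpy as np

def gaussian_weight(x, y, width, height, sigma_frac=0.25):
    """Gaussian weighting mask W(x, y) centered on the image; sigma_frac is
    an illustrative choice."""
    cx, cy = width / 2.0, height / 2.0
    sigma2 = (sigma_frac * max(width, height)) ** 2
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma2))

def matching_score(true_matches, pts_p, size_p):
    """MS(Ip, Iq): sum of W(x, y) over the locations of the true-match
    features in image Ip (equation (1))."""
    w, h = size_p
    return sum(gaussian_weight(pts_p[i][0], pts_p[i][1], w, h)
               for i, _ in true_matches)

def symmetric_matching_score(matches_pq, matches_qp, pts_p, pts_q,
                             size_p, size_q):
    """SMS(Ip, Iq) = (MS(Ip, Iq) + MS(Iq, Ip)) / 2 (equation (2))."""
    ms_pq = matching_score(matches_pq, pts_p, size_p)
    ms_qp = matching_score(matches_qp, pts_q, size_q)
    return 0.5 * (ms_pq + ms_qp)
```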
In some embodiments, the classifier 14 classifies the first and second images as either near duplicate images or non-near duplicate images based on the symmetric matching score.
In some embodiments, the classifier 14 discriminates near duplicate images from non-near duplicate images based on the symmetric matching score defined in equation (2) together with one or more other image features, including image metadata, global image features, and local image features.
In some embodiments, the feature processor 12 extracts values of metadata features (e.g. camera model, shot parameters, image properties, and capture time metadata) from the first and second images 16, 18, and the classifier 14 classifies the first and second images 16, 18 as either near duplicate images or non-near duplicate images based on the extracted metadata feature values. The metadata features typically are extracted from an EXIF header that is associated with each image 16, 18. One exemplary metadata feature that is used by embodiments of the classifier 14 for detecting a match between two images is the difference in the capture time metadata of the two images.
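As an illustration, the capture-time difference could be computed from the EXIF header roughly as follows; this sketch assumes Pillow is available and that the DateTimeOriginal tag (36867) is present, and the fallback to the DateTime tag (306) is an assumption:

```python
from datetime import datetime
from PIL import Image

EXIF_DATETIME_ORIGINAL = 36867  # standard EXIF tag id for DateTimeOriginal

def capture_time(path):
    """Read the EXIF capture time of an image; returns None if absent."""
    exif = Image.open(path).getexif()
    # DateTimeOriginal lives in the EXIF sub-IFD; fall back to DateTime (306).
    sub = exif.get_ifd(0x8769) if hasattr(exif, "get_ifd") else {}
    raw = sub.get(EXIF_DATETIME_ORIGINAL) or exif.get(306)
    if not raw:
        return None
    return datetime.strptime(str(raw), "%Y:%m:%d %H:%M:%S")

def capture_time_difference_seconds(path_a, path_b):
    """Absolute difference in capture time, in seconds (None if unavailable)."""
    ta, tb = capture_time(path_a), capture_time(path_b)
    if ta is None or tb is None:
        return None
    return abs((ta - tb).total_seconds())
```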
In some embodiments, the feature processor 12 extracts one or more global features (e.g., an adaptive color histogram) from the first and second images 16, 18, and the classifier 14 classifies the first and second images 16, 18 as either matching images or non-matching images based on the extracted global features. In some of these embodiments, an adaptive color histogram is extracted from each of the images 16, 18 and used by the classifier 14 for match detection. In these embodiments, the number of bins in the color histograms and their quantization are determined by adaptively clustering the image pixels in the LAB color space. One exemplary global feature that is used by embodiments of the classifier 14 for detecting a match between two images is the dissimilarity between the adaptive color histograms of the two images. In some embodiments this dissimilarity is measured by the Earth Mover's Distance (EMD), which is described in Y. Rubner et al., "The Earth Mover's Distance as a Metric for Image Retrieval," IJCV 40(2) (2000).
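A rough sketch of such a global feature, assuming OpenCV and scikit-learn, might cluster the pixels in LAB space with k-means, histogram the cluster assignments, and compare the resulting signatures with OpenCV's EMD implementation; the number of clusters, the pixel subsampling, and the use of k-means are illustrative assumptions:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def adaptive_color_histogram(image_bgr, n_bins=16):
    """Illustrative adaptive color histogram: cluster pixels in LAB space and
    histogram the cluster assignments, returned as an OpenCV EMD signature."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    lab = lab[::max(1, len(lab) // 10000)]  # subsample pixels to keep clustering fast
    km = KMeans(n_clusters=n_bins, n_init=4, random_state=0).fit(lab)
    counts = np.bincount(km.labels_, minlength=n_bins).astype(np.float32)
    weights = counts / counts.sum()
    # OpenCV EMD signature: each row is [weight, feature coordinates...].
    return np.hstack([weights[:, None], km.cluster_centers_]).astype(np.float32)

def histogram_dissimilarity(sig_a, sig_b):
    """Earth Mover's Distance between two adaptive histogram signatures."""
    emd, _, _ = cv2.EMD(sig_a, sig_b, cv2.DIST_L2)
    return emd
```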
In some embodiments, the classification stages 62 are ordered in accordance with the computational cost of extracting the features on which they are trained: the front-end classifiers are trained on features that are relatively inexpensive to extract, and the back-end classifiers are trained on features that are relatively more expensive to extract. This classifier structure, together with an on-demand feature extraction process in which only the features required by the current classification stage are extracted, yields significant efficiency gains. The end result is that easy image pairs tend to be classified using only the cheap features, so the expensive features need to be computed only for the harder pairs.
Some embodiments of the classifier building process proceed as follows.
The classification stages 62 are trained on progressively more expensive, yet more powerful, feature spaces. At test time, if a test sample is rejected by a cheap stage classifier Ci(Ti), none of the more expensive stage classifiers Cj(Tj), j > i, is triggered, thereby avoiding the extraction of the more expensive features.
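The test-time behavior of such a cascade can be sketched as follows, with placeholder feature extractors and stage classifiers; the convention that a near-duplicate decision terminates the cascade follows the scheme described above, while the exact stopping rule in a given embodiment may differ:

```python
def cascade_classify(image_pair, stages):
    """Illustrative cascade: each stage pairs a feature extractor with a
    classifier, stages are ordered by increasing extraction cost, and
    features are extracted on demand.

    stages : list of (extract_features, classify) callables, where
             classify(features) returns "near_duplicate" or
             "candidate_non_near_duplicate".  Both callables are placeholders.
    """
    decision = "candidate_non_near_duplicate"
    for extract_features, classify in stages:
        features = extract_features(image_pair)  # extracted only if this stage is reached
        decision = classify(features)
        if decision == "near_duplicate":
            break  # decided early; more expensive stages are never triggered
    # After the last stage, a remaining candidate is treated as a non-near duplicate.
    return "near_duplicate" if decision == "near_duplicate" else "non_near_duplicate"
```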
III. EXEMPLARY OPERATING ENVIRONMENT
Each of the images 16, 18 (see
Embodiments of the image match detection system 10 may be implemented by one or more discrete modules (or data processing components) that are not limited to any particular hardware, firmware, or software configuration. In the illustrated embodiments, these modules may be implemented in any computing or data processing environment, including in digital electronic circuitry (e.g., an application-specific integrated circuit, such as a digital signal processor (DSP)) or in computer hardware, firmware, device driver, or software. In some embodiments, the functionalities of the modules are combined into a single data processing component. In some embodiments, the respective functionalities of each of one or more of the modules are performed by a respective set of multiple data processing components.
The modules of the image match detection system 10 may be co-located on a single apparatus or they may be distributed across multiple apparatus; if distributed across multiple apparatus, these modules and the display 24 may communicate with each other over local wired or wireless connections, or they may communicate over global network connections (e.g., communications over the Internet).
In some implementations, process instructions (e.g., machine-readable code, such as computer software) for implementing the methods that are executed by the embodiments of the image match detection system 10, as well as the data they generate, are stored in one or more machine-readable media. Storage devices suitable for tangibly embodying these instructions and data include all forms of non-volatile computer-readable memory, including, for example, semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM.
In general, embodiments of the image match detection system 10 may be implemented in any one of a wide variety of electronic devices, including desktop computers, workstation computers, and server computers.
A user may interact (e.g., enter commands or data) with the computer 140 using one or more input devices 150 (e.g., a keyboard, a computer mouse, a microphone, a joystick, and a touch pad). Information may be presented through a user interface that is displayed to the user on the display 151 (implemented by, e.g., a display monitor), which is controlled by a display controller 154 (implemented by, e.g., a video graphics card). The computer system 140 also typically includes peripheral output devices, such as speakers and a printer. One or more remote computers may be connected to the computer system 140 through a network interface card (NIC) 156.
As shown in
IV. CONCLUSION
The embodiments that are described herein provide systems and methods that are capable of detecting matching images with high efficiency and effectiveness.
Other embodiments are within the scope of the claims.