Banknote validation

Information

  • Publication Number
    20070140551
  • Date Filed
    March 02, 2006
  • Date Published
    June 21, 2007
Abstract
A method of creating a classifier for banknote validation is described. Information from all of a set of training images from genuine banknotes is used to form a segmentation template which is then used to segment each of the training set images. Features are extracted from the segments and used to form a classifier which is preferably a one-class statistical classifier. Classifiers can be quickly and simply formed for different currencies and denominations in this way and without the need for examples of counterfeit banknotes. A banknote validator using such a classifier is described as well as a method of validating a banknote using such a classifier. In a preferred embodiment the banknote validator is incorporated in a self-service apparatus such as an automated teller machine.
Description
TECHNICAL FIELD

The present invention relates to a method and apparatus for banknote validation.


BACKGROUND

There is a growing need for automatic verification and validation of banknotes of different currencies and denominations in a simple, reliable, and cost effective manner. This is required, for example, in self-service apparatus which receives banknotes, such as self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, self-service currency exchange machines and the like.


Previously, manual methods of currency validation have involved image examination, transmission effects such as watermarks and thread registration marks, and the feel and even smell of banknotes. Other known methods have relied on semi-overt features requiring semi-manual interrogation, for example using magnetic means, ultraviolet sensors, fluorescence, infrared detectors, capacitance, metal strips, image patterns and the like. However, by their very nature these methods are manual or semi-manual and are not suitable for applications, such as self-service apparatus, where manual intervention is unavailable for long periods of time.


There are significant problems to be overcome in order to create an automatic currency validator. For example, many different types of currency exist with different security features and even substrate types. Within those currencies, different denominations also commonly exist with different levels of security features. There is therefore a need to provide a generic method of easily and simply performing currency validation for those different currencies and denominations.


Put simply, the task of a currency validator is to determine whether a given banknote is genuine or counterfeit. Previous automatic validation methods typically require a relatively large number of examples of counterfeit banknotes to be known in order to train the classifier. In addition, those previous classifiers are trained to detect known counterfeits only. This is problematic because often little or no information is available about possible counterfeits. For example, this is particularly problematic for newly introduced denominations or newly introduced currency.


In an earlier paper entitled, “Employing optimized combinations of one-class classifiers for automated currency validation”, published in Pattern Recognition 37, (2004) pages 1085-1096, by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application) an automated currency validation method is described (Pat. No. EP1484719, US2004247169). This involves segmenting an image of a whole banknote into regions using a grid structure. Individual “one-class” classifiers are built for each region and a small subset of the region specific classifiers are combined to provide an overall decision. (The term, “one-class” is explained in more detail below.) The segmentation and combination of region specific classifiers to achieve good performance is achieved by employing a genetic algorithm. This method requires a small number of counterfeit samples at the genetic algorithm stage and as such is not suitable when counterfeit data is unavailable.


There is also a need to perform automatic currency validation in a computationally inexpensive manner which can be performed in real time.


The invention seeks to provide an improved method and apparatus for banknote validation which overcomes or at least mitigates one or more of the problems mentioned above.


SUMMARY

A method of creating a classifier for banknote validation is described. Information from all of a set of training images from genuine banknotes is used to form a segmentation template which is then used to segment each of the training set images. Features are extracted from the segments and used to form a classifier which is preferably a one-class statistical classifier. Classifiers can be quickly and simply formed for different currencies and denominations in this way and without the need for examples of counterfeit banknotes. A banknote validator using such a classifier is described as well as a method of validating a banknote using such a classifier. In a preferred embodiment the banknote validator is incorporated in a self-service apparatus such as an automated teller machine.


We describe a method of creating a classifier for banknote validation. The method comprises the steps of:

    • accessing a training set of banknote images;
    • creating a segmentation template using the training set images;
    • segmenting each of the training set images using the segmentation template;
    • extracting one or more features from each segment in each of the training set images; and
    • forming the classifier using the feature information;


      wherein the segmentation template is created on the basis of information from all images in the training set.


By creating the segmentation template on the basis of information from all images in the training set we have found improved performance in banknote validation. In contrast, previous methods have used rigid grid structures for segmentation which do not require information from all the training set images to perform segmentation.


For example, the information from all images in the training set comprises morphological information, such as pattern, color, texture and the like. We have found empirically that use of this type of information leads to improved banknote validation performance.


In an example, the information from all images in the training set comprises information about a pixel at the same location in each of the training set images. This can comprise pixel intensity profiles, as explained in more detail below.


Preferably the segmentation template is created by using a clustering algorithm to cluster pixel locations in an image plane on the basis of the information from all the images in the training set. Any suitable clustering algorithm can be used as known in the art.


In a preferred embodiment the classifier is a one-class classifier. This is advantageous because by using a one-class classifier and the method of forming the segmentation template described above, we can remove the need for examples of counterfeit banknotes in the training set. Thus, preferably the training set images are of genuine banknotes only.


Preferably, the classifier is a statistical one-class classifier. These are typically less computationally intensive and perform better than neural network based approaches.


Preferably the step of forming the classifier comprises estimating a distribution of a statistic relating to banknotes in a target class, said target class comprising genuine currency.


In a particularly preferred embodiment the training set images are selected from any of reflection images, transmission images, visible information, non-visible information and other images such as magnetic, thermal and x-ray images.


It is also possible to use a feature selection algorithm to select one or more types of feature to use in the step of extracting features.


In addition the classifier can be formed on the basis of specified information about a particular denomination and currency of banknotes. For example, information about particularly data-rich regions in terms of color, spatial frequency or shape in a given currency and denomination.


The invention also encompasses an apparatus for creating a banknote classifier comprising:

    • an input arranged to access a training set of banknote images;
    • a processor arranged to create a segmentation template using the training set images;
    • a segmentor arranged to segment each of the training set images using the segmentation template;
    • a feature extractor arranged to extract one or more features from each segment in each of the training set images; and
    • classification forming means arranged to form the classifier using the feature information;


      wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set.


The invention also encompasses a banknote validator comprising:

    • an input arranged to receive at least one image of a banknote to be validated;
    • a segmentation template;
    • a processor arranged to segment the image of the banknote using the segmentation template;
    • a feature extractor arranged to extract one or more features from each segment of the banknote image;
    • a classifier arranged to classify the banknote as being either valid or not on the basis of the extracted features;


      wherein the segmentation template has been formed on the basis of information about each of a set of training images of banknotes.


In one example the banknote validator further comprises a plurality of classifiers and a combiner arranged to combine the results of each of the classifiers.


The invention also encompasses a method of validating a banknote comprising:

    • accessing at least one image of a banknote to be validated;
    • accessing a segmentation template;
    • segmenting the image of the banknote using the segmentation template;
    • extracting features from each segment of the banknote image;
    • classifying the banknote as being either valid or not on the basis of the extracted features using a classifier;


      wherein the segmentation template has been formed on the basis of information about each of a set of training images of banknotes.


The invention also encompasses a computer program comprising computer program code means adapted to perform all the steps of any of the methods described above when said program is run on a computer.


The computer program can be embodied on a computer readable medium.


The invention also encompasses a self-service apparatus comprising:

    • a means for accepting banknotes,
    • imaging means for obtaining digital images of the banknotes; and
    • a banknote validator as described above.


The method may be performed by software in machine readable form on a storage medium. The method steps may be carried out in any suitable order and/or in parallel as is apparent to the skilled person in the art.


This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions, (and therefore the software essentially defines the functions of the register, and can therefore be termed a register, even before it is combined with its standard hardware). For similar reasons, it is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.


The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.




BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:



FIG. 1 is a flow diagram of a method of creating a classifier for banknote validation;



FIG. 2 is a schematic diagram of an apparatus for creating a classifier for banknote validation;



FIG. 3 is a schematic diagram of a banknote validator;



FIG. 4 is a flow diagram of a method of validating a banknote;



FIG. 5 is a schematic diagram of a self-service apparatus with a banknote validator.




DETAILED DESCRIPTION

Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved.


The term “one class classifier” is used to refer to a classifier that is formed or built using information about examples only from a single class but which is used to allocate newly presented examples either to that single class or not. This differs from a conventional binary classifier which is created using information about examples from two classes and which is used to allocate new examples to one or other of those two classes. A one-class classifier can be thought of as defining a boundary around a known class such that examples falling outside that boundary are deemed not to belong to the known class.



FIG. 1 is a high level flow diagram of a method of creating a classifier for banknote validation.


First we obtain a training set of images of genuine banknotes (see box 10 of FIG. 1). These are images of the same type taken of banknotes of the same currency and denomination. The type of image relates to how the images are obtained, and this may be in any manner known in the art. For example, reflection images, transmission images, images on any of a red, blue or green channel, thermal images, infrared images, ultraviolet images, x-ray images or other image types. The images in the training set are in registration and are the same size. Pre-processing can be carried out to align the images and scale them to size if necessary, as known in the art.


We next create a segmentation template using information from the training set images (see box 12 of FIG. 1). The segmentation template comprises information about how to divide an image into a plurality of segments. The segments may be non-continuous, that is, a given segment can comprise more than one patch in different regions of the image. Preferably, but not essentially, the segmentation template also comprises a specified number of segments to be used.


Using the segmentation template we segment each of the images in the training set (see box 14 of FIG. 1). We then extract one or more features from each segment in each of the training set images (see box 16 of FIG. 1). By the term “feature” we mean any statistic or other characteristic of a segment. For example, the mean pixel intensity, median pixel intensity, mode of the pixel intensities, texture, histogram, Fourier transform descriptors, wavelet transform descriptors and/or any other statistics in a segment.
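By way of illustration only (this sketch is not part of the patent text), such per-segment statistics can be computed from a label template; the toy image, template and the particular choice of statistics are assumptions:

```python
import numpy as np

def extract_segment_features(image, template, n_segments):
    """Compute simple per-segment statistics (mean, median, standard
    deviation) for one banknote image, given a segmentation template
    assigning each pixel a segment label in 0..n_segments-1."""
    features = []
    for s in range(n_segments):
        pixels = image[template == s]
        features.extend([pixels.mean(), np.median(pixels), pixels.std()])
    return np.array(features)

# Toy 4x4 "image" whose left half is dark and right half bright, and a
# template splitting it into those two segments.
img = np.array([[0.0, 1.0, 8.0, 9.0],
                [1.0, 0.0, 9.0, 8.0],
                [0.0, 1.0, 8.0, 9.0],
                [1.0, 0.0, 9.0, 8.0]])
tmpl = np.array([[0, 0, 1, 1]] * 4)
fv = extract_segment_features(img, tmpl, 2)   # 2 segments x 3 statistics
```

Note that segments need not be contiguous: the boolean mask `template == s` gathers every pixel carrying label `s` wherever it lies in the image plane.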


A classifier is then formed using the feature information (see box 18 of FIG. 1). Any suitable type of classifier can be used as known in the art. In a particularly preferred embodiment of the invention the classifier is a one-class classifier and no information about counterfeit banknotes is needed. However, it is also possible to use a binary classifier or other type of classifier of any suitable type as known in the art.
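A minimal sketch of forming such a one-class statistical classifier, assuming Gaussian feature vectors and an illustrative acceptance threshold (neither the synthetic data nor the threshold value is mandated by the text):

```python
import numpy as np

class GaussianOneClass:
    """Minimal one-class model: fit mean and covariance on features of
    genuine notes only; classify by squared Mahalanobis distance."""

    def fit(self, X):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        return self

    def score(self, x):
        # Squared Mahalanobis distance from the fitted target class.
        d = x - self.mu
        return float(d @ self.cov_inv @ d)

    def is_genuine(self, x, threshold=16.0):  # threshold is illustrative
        return self.score(x) <= threshold

# Synthetic feature vectors standing in for genuine-note features.
rng = np.random.default_rng(0)
genuine_features = rng.normal(0.0, 1.0, size=(300, 4))
clf = GaussianOneClass().fit(genuine_features)
```

The key property mirrored here is that training uses genuine examples only; a newly presented note is accepted or rejected by where it falls relative to the boundary implied by the threshold.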


The method in FIG. 1 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly and effectively. To create classifiers for other currencies or denominations the method is repeated with appropriate training set images.


Previously, (as mentioned in the background section) we used a segmentation technique that involved using a grid structure over the image plane and a genetic algorithm method to form the segmentation template. This necessitated using some information about counterfeit notes.


The present invention uses a different method of forming the segmentation template which removes the need for using a genetic algorithm or equivalent method to search for a good segmentation template within a large number of possible segmentation templates. This reduces computational cost and improves performance. In addition the need for information about counterfeit banknotes is removed.


We believe that generally it is difficult in the counterfeiting process to provide a uniform quality of imitation across the whole note, and therefore certain regions of a note are more difficult than others to copy successfully. We therefore recognized that rather than using a rigidly uniform grid segmentation we could improve banknote validation by using a more sophisticated segmentation. Empirical testing that we carried out indicated that this is indeed the case. Segmentation based on morphological characteristics such as pattern, color and texture led to a better performance in detecting counterfeits. However, traditional image segmentation methods, such as using edge detectors, were difficult to use when applied to each image in the training set. This is because varying results are obtained for each training set member and it is difficult to align corresponding features in different training set images. In order to avoid this problem of aligning segments we used, in one preferred embodiment, a so-called “spatio-temporal image decomposition”.


Details about the method of forming the segmentation template are now given. At a high level this method can be thought of as specifying how to divide the image plane into a plurality of segments, each comprising a plurality of specified pixels. The segments can be non-continuous as mentioned above. In the present invention, this specification is made on the basis of information from all images in the training set. In contrast, segmentation using a rigid grid structure does not require information from images in the training set.


Consider the images in the training set as being stacked and in registration with one another in the same orientation. Taking a given pixel in the note image plane this pixel is thought of as having a “pixel intensity profile” comprising information about the pixel intensity at that particular pixel position in each of the training set images. Using any suitable clustering algorithm, pixel positions in the image plane are clustered into segments, where pixel positions in those segments have similar or correlated pixel intensity profiles.


In a preferred example we use these pixel intensity profiles. However, it is not essential to use pixel intensity profiles. It is also possible to use other information from all images in the training set. For example, intensity profiles for blocks of 4 neighboring pixels or mean values of pixel intensities for pixels at the same location in each of the training set images.


A particularly preferred embodiment of our method of forming the segmentation template is now described in detail. This is based on the method taught in the following publication: “EigenSegments: A spatio-temporal decomposition of an ensemble of images”, Avidan, S., Lecture Notes in Computer Science, 2352: 747-758, 2002.


Given an ensemble of images {I_i}, i = 1, 2, . . . , N, which have been registered and scaled to the same size r×c, each image I_i can be represented by its pixels in vector form as [a_{1i}, a_{2i}, . . . , a_{Mi}]^T, where a_{ji} (j = 1, 2, . . . , M) is the intensity of the jth pixel in the ith image and M = r·c is the total number of pixels in the image. A design matrix A ∈ ℝ^{M×N} can then be generated by stacking the vectors I_i (zeroed using the mean value) of all images in the ensemble, thus A = [I_1, I_2, . . . , I_N]. A row vector [a_{j1}, a_{j2}, . . . , a_{jN}] in A can be seen as an intensity profile for a particular (jth) pixel across the N images. If two pixels come from the same pattern region of the image they are likely to have similar intensity values and hence a strong temporal correlation. Note that the term “temporal” here need not correspond exactly to the time axis but is borrowed to indicate the axis across the different images in the ensemble. Our algorithm tries to find these correlations and segments the image plane spatially into regions of pixels that have similar temporal behavior. We measure this correlation by defining a metric between intensity profiles. A simple choice is the Euclidean distance, i.e. the temporal correlation between two pixels j and k can be denoted as

d(j,k) = \sqrt{\sum_{i=1}^{N} (a_{ji} - a_{ki})^2}

The smaller d(j,k), the stronger the correlation between the two pixels.
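The design matrix and intensity-profile distance above can be sketched as follows; the toy three-image ensemble is an assumption for illustration:

```python
import numpy as np

def profile_distance(A, j, k):
    """Euclidean distance between the intensity profiles (rows j and k)
    of the design matrix A: d(j,k) = sqrt(sum_i (a_ji - a_ki)^2)."""
    return float(np.sqrt(np.sum((A[j] - A[k]) ** 2)))

# Toy ensemble: three registered 4-pixel "images", stacked so that
# columns are images and rows are per-pixel profiles, then zeroed
# using the mean value as in the text.
images = np.array([[1.0, 2.0, 3.0, 4.0],
                   [1.0, 2.0, 3.0, 5.0],
                   [1.0, 2.0, 3.0, 6.0]])
A = images.T                                  # M x N design matrix
A = A - A.mean(axis=1, keepdims=True)

d01 = profile_distance(A, 0, 1)   # pixels 0 and 1 co-vary: distance 0
d03 = profile_distance(A, 0, 3)   # pixel 3 behaves differently
```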


In order to decompose the image plane spatially using the temporal correlations between pixels, we run a clustering algorithm on the pixel intensity profiles (the rows of the design matrix A). This produces clusters of temporally correlated pixels. The most straightforward choice is to employ the K-means algorithm, but any other clustering algorithm could be used. As a result the image plane is segmented into several segments of temporally correlated pixels. This can then be used as a template to segment all images in the training set, and a classifier can be built on features extracted from those segments of all images in the training set.
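A non-authoritative sketch of this clustering step, using a deliberately minimal K-means (the deterministic initialization and toy image stack are assumptions made for the sketch; any clustering algorithm could be substituted):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal K-means on the rows of X, returning one label per row.
    Initialization is deterministic (rows spread evenly) purely to keep
    this sketch reproducible; a real implementation would use k-means++."""
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each profile to its nearest centre (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centres, keeping the old centre if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = X[labels == c].mean(axis=0)
    return labels

def segmentation_template(images, k):
    """Cluster per-pixel intensity profiles of a stack of registered,
    same-size images (shape N x rows x cols) into k segments."""
    n, r, c = images.shape
    A = images.reshape(n, r * c).T          # M x N design matrix
    A = A - A.mean(axis=1, keepdims=True)   # zero each profile's mean
    return kmeans(A, k).reshape(r, c)

# Toy stack of five 4x4 "notes": the left half is constant across the
# stack while the right half varies from image to image.
imgs = np.stack([np.hstack([np.full((4, 2), 5.0), np.full((4, 2), float(i))])
                 for i in range(5)])
tmpl = segmentation_template(imgs, 2)   # one segment label per pixel position
```

The resulting label map plays the role of the segmentation template: pixels whose profiles behave alike across the ensemble end up in the same segment, regardless of where they sit in the image plane.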


In order to achieve the training without utilizing counterfeit notes, a one-class classifier is preferable. Any suitable type of one-class classifier can be used as known in the art, for example neural network based one-class classifiers and statistical based one-class classifiers.


Suitable statistical methods for one-class classification are in general based on maximization of the log-likelihood ratio under the null hypothesis that the observation under consideration is drawn from the target class. These include the D2 test (described in Morrison, D. F.: Multivariate Statistical Methods (third edition), McGraw-Hill Publishing Company, New York, 1990), which assumes a multivariate Gaussian distribution for the target class (genuine currency). In the case of an arbitrary non-Gaussian distribution, the density of the target class can be estimated using, for example, a semi-parametric Mixture of Gaussians (described in Bishop, C. M.: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (described in Duda, R. O., Hart, P. E., Stork, D. G.: Pattern Classification (second edition), John Wiley & Sons, Inc., New York, 2001), and the distribution of the log-likelihood ratio under the null hypothesis can be obtained by sampling techniques such as the bootstrap (described in Wang, S., Woodward, W. A., Gray, H. L. et al: A new test for outlier detection from a multivariate mixture distribution, Journal of Computational and Graphical Statistics, 6(3): 285-299, 1997).


Other methods which can be employed for one-class classification are Support Vector Data Domain Description (SVDD) (described in Tax, D. M. J., Duin, R. P. W.: Support vector domain description, Pattern Recognition Letters, 20(11-12): 1191-1199, 1999), also known as ‘support estimation’ (described in Hayton, P., Schölkopf, B., Tarassenko, L., Anuzis, P.: Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra, Advances in Neural Information Processing Systems, 13, eds Leen, Todd K and Dietterich, Thomas G and Tresp, Volker, MIT Press, 946-952, 2001), and Extreme Value Theory (EVT) (described in Roberts, S. J.: Novelty detection using extreme value statistics, IEE Proceedings on Vision, Image & Signal Processing, 146(3): 124-129, 1999). In SVDD the support of the data distribution is estimated, whilst EVT estimates the distribution of extreme values. For this particular application large numbers of examples of genuine notes are available, so it is possible to obtain reliable estimates of the target class distribution. We therefore choose, in a preferred embodiment, one-class classification methods that can estimate the density distribution explicitly, although this is not essential. In a preferred embodiment we use one-class classification methods based on the parametric D2 test.


In a preferred embodiment, the statistical hypothesis tests used for our one-class classifier are detailed as follows:


Consider N independent and identically distributed p-dimensional vector samples (the feature set for each banknote) x1, . . . , xN∈ C with an underlying density function with parameters θ given as p(x|θ). The following hypothesis test is given for a new point xN+1 such that H0: xN+1∈C vs. H1: xN+1∉C, where C denotes the region where the null hypothesis is true and is defined by p(x|θ). Assuming that the distribution under the alternate hypothesis is uniform then the standard log-likelihood ratio for the null and alternate hypothesis
\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} = \frac{\sup_{\theta} \prod_{n=1}^{N+1} p(x_n \mid \theta)}{\sup_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)}   (1)

can be employed as a test statistic for the null hypothesis. In this preferred embodiment we use the log-likelihood ratio as the test statistic for the validation of a newly presented note.


1) Feature vectors with multivariate Gaussian density: Under the assumption that the feature vectors describing individual points in a sample are multivariate Gaussian, a test emerges from the above likelihood ratio (1) to assess whether each point in a sample shares a common mean. Consider N independent and identically distributed p-dimensional vector samples x_1, . . . , x_N from a multivariate normal distribution with mean μ and covariance C, whose sample estimates are μ̂_N and Ĉ_N. From the sample consider a randomly selected point denoted x_0; the associated squared Mahalanobis distance

D^2 = (x_0 - \hat{\mu}_N)^T \hat{C}_N^{-1} (x_0 - \hat{\mu}_N)   (2)

can be shown to be distributed as a central F-distribution with p and N−p−1 degrees of freedom by

F = \frac{(N-p-1)\,N\,D^2}{p\left((N-1)^2 - N D^2\right)}.   (3)


Then the null hypothesis that x_0 and the remaining x_i share a common population mean vector will be rejected if

F > F_{\alpha;\,p,\,N-p-1},   (4)

where F_{α;p,N−p−1} is the upper α·100% point of the F-distribution with (p, N−p−1) degrees of freedom. We can make use of the following incremental estimates of the mean and covariance in devising a test for new examples which do not form part of the original sample when an additional datum x_{N+1} is made available, i.e. the mean

\hat{\mu}_{N+1} = \frac{1}{N+1}\left\{N\hat{\mu}_N + x_{N+1}\right\}   (5)

and the covariance
\hat{C}_{N+1} = \frac{N}{N+1}\,\hat{C}_N + \frac{N}{(N+1)^2}\,(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T.   (6)

By using the expressions (5) and (6) and the matrix inversion lemma, Equation (2) for an N-sample reference set and an (N+1)th test point becomes

D^2 = \sigma_{N+1}^T \hat{C}_{N+1}^{-1} \sigma_{N+1},   (7)

where

\sigma_{N+1} = \frac{N}{N+1}\,(x_{N+1} - \hat{\mu}_N),   (8)

and

\hat{C}_{N+1}^{-1} = \frac{N+1}{N}\,\hat{C}_N^{-1} - \frac{\hat{C}_N^{-1}\,\sigma_{N+1}\sigma_{N+1}^T\,\hat{C}_N^{-1}}{N + \frac{N}{N+1}\,\sigma_{N+1}^T \hat{C}_N^{-1} \sigma_{N+1}}.   (9)

Denoting σ_{N+1}^T Ĉ_N^{-1} σ_{N+1} by D_{N+1,N}^2, then

D^2 = \frac{N+1}{N}\,D_{N+1,N}^2\left\{1 - \frac{D_{N+1,N}^2}{D_{N+1,N}^2 + N + 1}\right\}.   (10)

So a new point x_{N+1} can be tested against an estimated, assumed normal distribution with common estimated mean μ̂_N and covariance Ĉ_N. Employing the log-likelihood ratio (1) for the one-class hypothesis test we derive the test statistic (10) directly. The assumption of multivariate Gaussian feature vectors often does not hold in practice, though we have found it an appropriate pragmatic choice in many applications. We relax this assumption and consider arbitrary densities in the following section.
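As a hedged illustration of equations (2) and (3), the D² statistic and its F mapping can be computed as follows; the synthetic feature data are assumptions, and in practice F would be compared against the upper-α F quantile of equation (4):

```python
import numpy as np

def d_squared(x0, mu_hat, cov_hat):
    """Squared Mahalanobis distance, equation (2):
    D^2 = (x0 - mu)^T C^{-1} (x0 - mu)."""
    diff = x0 - mu_hat
    return float(diff @ np.linalg.inv(cov_hat) @ diff)

def f_statistic(d2, n, p):
    """Map D^2 to the F statistic of equation (3), to be compared with
    the upper-alpha point of the F distribution with (p, n-p-1) degrees
    of freedom, equation (4)."""
    return (n - p - 1) * n * d2 / (p * ((n - 1) ** 2 - n * d2))

# Toy target class: feature vectors of genuine notes (synthetic data).
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(200, 3))
mu_hat = X.mean(axis=0)
cov_hat = np.cov(X, rowvar=False)

d2_typical = d_squared(mu_hat, mu_hat, cov_hat)            # zero by construction
d2_outlier = d_squared(np.full(3, 8.0), mu_hat, cov_hat)   # far from the class
```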


2) Feature vectors with arbitrary density: A probability density estimate p̂(x;θ) can be obtained from the finite data sample S = {x_1, . . . , x_N} ⊂ ℝ^d drawn from an arbitrary density p(x), by using any suitable semi-parametric (e.g. Gaussian Mixture Model) or non-parametric (e.g. Parzen window) density estimation method as known in the art. This density can then be employed in computing the log-likelihood ratio (1). Unlike the case of the multivariate Gaussian distribution there is no analytic distribution for the test statistic (λ) under the null hypothesis. To obtain this distribution, numerical bootstrap methods can be employed to estimate the otherwise non-analytic null distribution, and so the various critical values λ_crit can be established from the empirical distribution obtained. It can be shown that in the limit as N→∞, the likelihood ratio can be estimated by the following
\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} \approx \hat{p}(x_{N+1}; \hat{\theta}_N)   (13)

where {circumflex over (p)}(xN+1;{circumflex over (θ)}N) denotes the probability density of xN+1 under the model estimated by the original N samples.


After generating B bootstrap sets of N samples from the reference data set and using each of these to estimate the parameters of the density distribution θ̂_N^i, B bootstrap replicates of the test statistic λ_crit^i, i = 1, . . . , B can be obtained by randomly selecting an (N+1)th sample and computing p̂(x_{N+1}; θ̂_N^i) ≈ λ_crit^i. By ordering the λ_crit^i in ascending order, the critical value λ_α can be defined so as to reject the null hypothesis at the desired significance level if λ ≦ λ_α, where λ_α is the jth smallest value of λ_crit^i and α = j/(B+1).
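A rough sketch of this bootstrap procedure using a one-dimensional Gaussian Parzen-window density; the bandwidth, sample sizes, bootstrap count and data are all illustrative assumptions:

```python
import numpy as np

def parzen_density(x, sample, h=0.5):
    """1-D Gaussian Parzen-window estimate of p(x) from a sample,
    with (assumed) bandwidth h."""
    k = np.exp(-0.5 * ((x - sample) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return float(k.mean())

def bootstrap_critical_value(data, alpha=0.05, B=500, seed=0):
    """Estimate lambda_alpha: evaluate the density of a held-out point
    under B bootstrap re-estimates of the density, order the results,
    and take the j-th smallest with alpha = j / (B + 1)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    lam = np.empty(B)
    for i in range(B):
        boot = rng.choice(data, size=n, replace=True)
        extra = data[rng.integers(n)]      # the (N+1)th sample
        lam[i] = parzen_density(extra, boot)
    lam.sort()
    j = max(1, int(alpha * (B + 1)))
    return lam[j - 1]

genuine = np.random.default_rng(2).normal(0.0, 1.0, 400)
lam_crit = bootstrap_critical_value(genuine)

def is_genuine(x, data, lam_crit):
    # Reject the null hypothesis (note not genuine) when the density
    # of the new point falls at or below the critical value.
    return parzen_density(x, data) > lam_crit
```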


Preferably the method of forming the classifier is repeated for different numbers of segments and tested using images of banknotes known to be either counterfeit or not. The number of segments giving the best performance is then selected and the classifier using that number of segments is used. We found the best number of segments to be from about 2 to 12, although any suitable number of segments can be used.



FIG. 2 is a schematic diagram of an apparatus 20 for creating a classifier 22 for banknote validation. It comprises:

    • an input 21 arranged to access a training set of banknote images;
    • a processor 23 arranged to create a segmentation template using the training set images;
    • a segmentor 24 arranged to segment each of the training set images using the segmentation template;
    • a feature extractor 25 arranged to extract one or more features from each segment in each of the training set images; and
    • classification forming means 26 arranged to form the classifier using the feature information;


      wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set. For example, by using spatio-temporal image decomposition described above.



FIG. 3 is a schematic diagram of a banknote validator 31. It comprises:

    • an input arranged to receive at least one image 30 of a banknote to be validated;
    • a segmentation template 32;
    • a processor 33 arranged to segment the image of the banknote using the segmentation template;
    • a feature extractor 34 arranged to extract one or more features from each segment of the banknote image;
    • a classifier 35 arranged to classify the banknote as being either valid or not on the basis of the extracted features;


      wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes. It is noted that the components of FIG. 3 need not be independent of one another; they may be integral.



FIG. 4 is a flow diagram of a method of validating a banknote. The method comprises:

    • accessing at least one image of a banknote to be validated;
    • accessing a segmentation template;
    • segmenting the image of the banknote using the segmentation template;
    • extracting features from each segment of the banknote image;
    • classifying the banknote as being either valid or not on the basis of the extracted features using a classifier;


      wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes. These method steps can be carried out in any suitable order or in combination as is known in the art. The segmentation template can be said to implicitly comprise information about each of the images in the training set because it has been formed on the basis of that information.


However, the explicit content of the segmentation template can be as simple as a file listing the pixel addresses to be included in each segment.
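By way of illustration only, such a file could be realised as follows; the JSON layout and all names are hypothetical, as the description above specifies only that the template lists the pixel addresses of each segment.

```python
import json
import os
import tempfile

# A segmentation template stored as a simple file: each segment is a
# list of [row, col] pixel addresses (layout here is hypothetical).
template = {
    "segment_0": [[r, c] for r in range(8) for c in range(4)],
    "segment_1": [[r, c] for r in range(8) for c in range(4, 8)],
}

path = os.path.join(tempfile.mkdtemp(), "template.json")
with open(path, "w") as f:
    json.dump(template, f)

with open(path) as f:
    loaded = json.load(f)

print(len(loaded["segment_0"]))  # prints 32: pixel addresses in the first segment
```

The clustering information used to derive the segments is thus discarded; only the resulting pixel-to-segment assignment needs to be shipped with the validator.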



FIG. 5 is a schematic diagram of a self-service apparatus 51 with a banknote validator 53. It comprises:

    • a means for accepting banknotes 50;
    • imaging means for obtaining digital images of the banknotes 52; and
    • a banknote validator 53 as described above.


The means for accepting banknotes and the imaging means are each of any suitable type known in the art.


Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.


It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art.

Claims
  • 1. A method of creating a classifier for banknote validation, said method comprising the steps of: (i) accessing a training set of banknote images; (ii) creating a segmentation template using the training set images; (iii) segmenting each of the training set images using the segmentation template; (iv) extracting one or more features from each segment in each of the training set images; and (v) forming the classifier using the feature information; wherein the segmentation template is created on the basis of information from all images in the training set.
  • 2. A method as claimed in claim 1 wherein the information from all images in the training set comprises morphological information.
  • 3. A method as claimed in claim 1 wherein the information from all images in the training set comprises information about a pixel at the same location in each of the training set images.
  • 4. A method as claimed in claim 2, wherein the information from all the images comprises pixel intensity profiles.
  • 5. A method as claimed in claim 1, wherein the segmentation template is created by using a clustering algorithm to cluster pixel locations in an image plane on the basis of the information from all the images in the training set.
  • 6. A method as claimed in claim 1, wherein the classifier is a one-class classifier.
  • 7. A method as claimed in claim 1, wherein the classifier is a statistical one-class classifier.
  • 8. A method as claimed in claim 7 and wherein the step of forming the classifier comprises estimating a distribution of a statistic relating to banknotes in a target class, said target class comprising genuine currency.
  • 9. A method as claimed in claim 1, which further comprises using a feature selection algorithm to select one or more types of feature to use in step (iv) of extracting features.
  • 10. A method as claimed in claim 1, which further comprises forming the classifier on the basis of specified information about a particular denomination and currency of banknotes.
  • 11. A method as claimed in claim 1, which further comprises combining classifiers where necessary in step (v) of forming the classifier.
  • 12. An apparatus for creating a banknote classifier comprising: (i) an input arranged to access a training set of banknote images; (ii) a processor arranged to create a segmentation template using the training set images; (iii) a segmentor arranged to segment each of the training set images using the segmentation template; (iv) a feature extractor arranged to extract one or more features from each segment in each of the training set images; and (v) classification forming means arranged to form the classifier using the feature information; wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set.
  • 13. A banknote validator comprising: (i) an input arranged to receive at least one image of a banknote to be validated; (ii) a segmentation template; (iii) a processor arranged to segment the image of the banknote using the segmentation template; (iv) a feature extractor arranged to extract one or more features from each segment of the banknote image; (v) a classifier arranged to classify the banknote as being either valid or not on the basis of the extracted features; wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes.
  • 14. A banknote validator as claimed in claim 13 wherein the information about each of a set of training images comprises morphological information.
  • 15. A banknote validator as claimed in claim 13, wherein the information about each of a set of training images comprises information about a pixel at the same location in each of the training set images.
  • 16. A banknote validator as claimed in claim 13, wherein the information about each of a set of training images comprises pixel intensity profiles.
  • 17. A banknote validator as claimed in claim 13, wherein the classifier is a one-class classifier.
  • 18. A banknote validator as claimed in claim 13, wherein the classifier is a statistical one-class classifier.
  • 19. A banknote validator as claimed in claim 13, which further comprises a plurality of classifiers and a combiner arranged to combine the results of each of the classifiers.
  • 20. A method of validating a banknote comprising: (i) accessing at least one image of a banknote to be validated; (ii) accessing a segmentation template; (iii) segmenting the image of the banknote using the segmentation template; (iv) extracting features from each segment of the banknote image; (v) classifying the banknote as being either valid or not on the basis of the extracted features using a classifier; wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes.
  • 21. A method as claimed in claim 20, wherein said classifier is a one-class classifier.
  • 22. A method as claimed in claim 20, wherein said classifier is a statistical classifier.
  • 23. A computer program comprising computer program code means adapted to perform a method of creating a classifier for banknote validation, said method comprising the steps of: (i) accessing a training set of banknote images; (ii) creating a segmentation template using the training set images; (iii) segmenting each of the training set images using the segmentation template; (iv) extracting one or more features from each segment in each of the training set images; and (v) forming the classifier using the feature information; wherein the segmentation template is created on the basis of information from all images in the training set.
  • 24. A computer program as claimed in claim 23 embodied on a computer readable medium.
Parent Case Info

This application is a continuation-in-part application of co-pending application Ser. No. 11/305,537 filed Dec. 16, 2005.

Continuation in Parts (1)
Number Date Country
Parent 11305537 Dec 2005 US
Child 11366147 Mar 2006 US