Devices for recognizing, identifying and validating objects such as coins are widely used in coin acceptor and coin rejecter mechanisms, and many such devices are in existence and in regular use. Such devices sense or feel the coin or other object as it moves past a sensing station and use this information in a device such as a microprocessor to determine the genuineness, identity and validity of each coin, and they are generally very successful at doing so. However, one of the problems encountered by such devices is variation in the same type of coin from batch to batch and over time, together with other variables including wear and dirt. These cause changes, small in some cases, that differ from one coin type to another in both the U.S. and foreign coin markets. Such changes or variations can make it difficult, if not impossible, to distinguish between genuine coins and counterfeit coins or slugs where the similarities are relatively substantial compared to the differences.
The present invention takes a new direction in coin recognition, identification and validation by making use of a weighted error correlation coefficient algorithm. This technology has not heretofore been used in devices for sensing, identifying, recognizing and validating coins such as the coins fed into a vending or like machine. The weighted error correlation coefficient algorithm has advantages over known approaches: it is relatively transparent and straightforward to implement compared with more complex pattern recognition methods; it can be restricted to integer math, allowing it to be coded for a cost-effective embedded target; and it recognizes data trends while still giving separation due to gross errors. The present invention therefore represents a technology in a coin sensing environment which has not been used in the past.
The method of the present invention utilizes an inclined rail to roll coins and other similar objects past one or more sensors that sense two or more characteristics of the coin, resulting in measurements of parameters of the coin. In accordance with the present invention, a number of features are developed from these measurements. Each resulting feature is evaluated as to where it falls within its predetermined limits. Each feature is factored with a pre-assigned degree of significance, and all are used in a validation algorithm to determine acceptability.
With the present system it is recognized that each different coin denomination will have its own pattern and the same system can be used to recognize, identify and validate, or invalidate, coins of more than one denomination including coins of different denominations from the U.S. and foreign coinage systems.
The novelty of the present invention relates in large part to the signal processing and the method that is used. The signal processing involves extracting features from signals generated during passage of a coin and interpreting these signals in a feature manipulation process. This increases the performance sensitivity without adding new or more complicated sensors.
In a preferred embodiment, the present device utilizes two pairs of coils connected with capacitors to form two tank circuits with two frequencies, and uses two optical sensors. Each coin, when magnetically and optically sensed, will produce distinctive features that determine its denomination value and metallic authenticity.
The present device includes the sensors, the signal conditioning circuits including the means for controlling the sensors, data acquisition means, feature determination means, and algorithm implementation. The sensors themselves may be of known physical construction, such as shown in Wang U.S. Pat. No. 5,485,908.
Referring now to the drawings in which like reference numbers represent corresponding parts throughout.
Referring to the drawings more particularly by reference numbers, number 20 in
The cluster classifier device 26 has an output on which signals are fed to a comparator circuit 32, which receives other inputs from an ellipsoid-shaped raster or area 33. The outputs of the comparator circuit 32 are fed to the switch 28 for application to the neural network classifier 30. The comparator circuit 32 also produces outputs on lead 34 which indicate the presence of a rejected coin; this occurs when the comparator circuit 32 generates a comparison of a particular type. The decisions are produced on output 36 of the neural network classifier 30.
The signals collected by the sensors are processed in a signal preprocessing stage. Extraction of the most dominant and salient information about the coin occurs in the feature extraction circuit 24. A feature vector (FV) is formed by combining all of the preprocessed information, and this feature vector is then fed to the hyper-ellipsoidal classifier circuit 26, which classifies the object or coin according to its denomination. If the object or coin is not classifiable by its denomination because it is a counterfeit coin or slug, the classifier circuit will cause the comparator 32 to produce an output that is used to reject the coin, by producing a signal on lead 34. The classification of the coin takes place in the comparison means 32, which compares the output of the cluster classifier 26 with the ellipsoid-shaped reference 33 received on its other input.
After all of the neural networks have been trained (such training being known in the art), the subject coin validation system is ready for classification. The signals with their distinctive features are then collected from the unknown object or coin and are formed into the feature vector (FV). The feature vector is first verified to see if it falls within an ellipse as defined by the mathematics of the system. The object or coin is rejected as counterfeit if its feature vector is found not to fall within any ellipse. If not rejected, the object or coin is considered a candidate, and the same feature vector is fed to the neural network, whose output levels are compared against each other. The object or coin is again subject to rejection as counterfeit if the output value of the first neuron level is greater than that of the second neuron level. Otherwise it will be accepted as a valid coin belonging to a predetermined denomination or range of denominations.
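By way of illustration, the two-stage accept/reject decision just described can be sketched in C as follows. This is a minimal sketch only; the interface names inside_ellipsoid and neural_net_eval are assumed for illustration and are not part of the disclosure.

```c
/* Assumed interfaces (illustrative only, not from the disclosure): */
int  inside_ellipsoid(const double *fv, int n);               /* stage-1 ellipse gate */
void neural_net_eval(const double *fv, int n, double out[2]); /* two output levels    */

typedef enum { REJECT = 0, ACCEPT = 1 } verdict_t;

verdict_t validate(const double *fv, int n)
{
    /* Stage 1: reject if the feature vector falls within no ellipse. */
    if (!inside_ellipsoid(fv, n))
        return REJECT;

    /* Stage 2: feed the same feature vector to the neural network
       and compare its output levels against each other. */
    double out[2];
    neural_net_eval(fv, n, out);
    if (out[0] > out[1])   /* first neuron level exceeds the second */
        return REJECT;     /* counterfeit indication                */
    return ACCEPT;
}
```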
Refer now to
Refer now to
Turn now to the
The flow chart of
The difference between the minimum and maximum values and the nominal value for each feature can vary greatly, particularly between the different coin types being validated. A coin being considered for validation must produce a value within the minimum and maximum limits on all tested features. At this point, it should be understood that the weighted-error coefficient value for each feature scales the effect of any deviation of a measured value from the nominal feature value with respect to the upper and lower limits for that coin. The weighted-error coefficient value line 176 indicates the relative weight assigned to each feature. For the two features illustrated thus far in
The other features shown in
To perform coin validation, two key components are required: sensors that capture information about the coin, and a numerical solution for classifying coins based on that information. With new coin validation products, the goal is to improve on preexisting methodologies, usually by incorporating advancements from among the following:
The present invention provides 18 validation features: 3 sizing features and 15 magnetic features. The three sizing features all involve computation using multiple sensor readings, while all 15 of the magnetic features are obtained directly from sensor readings. Three of the magnetic features are produced by user-configurable algorithms, in which an equation is represented by placeholders designating the features to use as variables, together with mathematical operators. These features are hereafter referred to as "virtual features".
The magnetic features consist of 5 readings from each of 3 separate scans of the coin with the magnetic sensors, called the coil A scan, coil B1 (first B) scan, and coil B2 (second B) scan. The first is captured using coil A (120 kHz), and the second and third are captured using coil B (16 kHz). The 5 readings are the coil period (time between the first and second successive peaks of the decaying sinusoid), phase (time between the first and nth sampled peaks, where n>2), 2 successive peak amplitudes, and the difference between the two peaks (tau). During coil data collection, 10 peak amplitudes are obtained from each scan, for 30 peaks total. On coil A, due to its high frequency relative to the digitizing speed of the analog-to-digital (ATD) hardware, the peaks sampled are actually just the odd peaks starting with the third (peaks 3, 5, 7 . . . 21). The coil B peaks sampled are every peak starting with the second (peaks 2 through 11).
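The peak-indexing scheme just described can be made concrete with a short C sketch; the loop simply enumerates which peak of the decaying sinusoid each of the 10 stored samples corresponds to.

```c
#include <stdio.h>

int main(void)
{
    /* Coil A: odd peaks starting with the third (3, 5, 7 ... 21).
       Coil B: every peak starting with the second (2 through 11). */
    for (int i = 0; i < 10; i++) {
        int coil_a_peak = 3 + 2 * i;
        int coil_b_peak = 2 + i;
        printf("sample %d -> coil A peak %2d, coil B peak %2d\n",
               i, coil_a_peak, coil_b_peak);
    }
    return 0;
}
```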
Algorithm Details on “Size”:
Symmetry
This is the ratio of the optic blocking/unblocking times, giving not only an indication of the diameter of the coin, but also exhibiting a broader distribution for coins that are sided/asymmetric (more so than the optical size calculation). It is calculated using the formula:
where:
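The formula and its term definitions are not reproduced in the available text. As a hypothetical reconstruction only, assuming the feature is the ratio of the optic blocked time to the optic unblocked time, scaled for integer math, the calculation might resemble:

```c
/* Hypothetical only: the actual formula is not reproduced in the text.
   Assumes symmetry = (blocked time / unblocked time), scaled by 1000
   so the ratio survives integer division on the embedded target.      */
long symmetry(long t_blocked, long t_unblocked)
{
    if (t_unblocked == 0)
        return 0;                      /* guard: no unblocked interval */
    return (t_blocked * 1000L) / t_unblocked;
}
```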
This feature is the ratio of the coil A magnetic detection time to the total optic blocking time. The magnetic detection time runs from when the coil A peak amplitude first varies by 100 or more millivolts from the air reading to when it is back within 100 millivolts of the air reading (this threshold is configurable). It is calculated using the formula:
where:
This feature is dependent on the thickness and permeability of the metallic material being measured, as well as proximity of the coil to the coin.
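Since the formula itself is not reproduced in the available text, the following is a minimal sketch of the measurement as described, assuming the detection window opens when the coil A peak amplitude departs from the air reading by the 100 mV threshold and closes when it returns within it; the function name and the x1000 integer scaling are illustrative.

```c
#define THRESH_MV 100   /* configurable detection threshold, per the text */

/* peak_mv[i] / t_stamp[i]: coil A peak amplitudes and their timestamps.
   Returns (magnetic detection time / optic blocking time) x 1000.       */
long magnetic_size(const int *peak_mv, const long *t_stamp, int n,
                   int air_mv, long t_optic_block)
{
    long t_start = -1, t_end = -1;
    for (int i = 0; i < n; i++) {
        int dev = peak_mv[i] - air_mv;
        if (dev < 0) dev = -dev;                   /* |deviation from air| */
        if (dev >= THRESH_MV) {
            if (t_start < 0) t_start = t_stamp[i]; /* window opens   */
            t_end = t_stamp[i];                    /* window extends */
        }
    }
    if (t_start < 0 || t_optic_block == 0)
        return 0;                                  /* no detection */
    return ((t_end - t_start) * 1000L) / t_optic_block;
}
```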
Coil A, B1, and B2 Period
This feature is the time between two successive phase-detect crossings as measured by the coil validation hardware. The phase-detect (aka zero-cross/DC cross comparator) circuitry provides a signal to an HC12 (a microcontroller manufactured by Freescale Semiconductor) input capture timer, which is used not only to determine the frequency at which the tank is oscillating, but also to synchronize ATD peak sampling. A single period is used as a feature due to the tight distribution it exhibits for like coins.
Notes: This feature is in units of HC12 timer counts; the timer operates at a bus frequency of 24 MHz, so each period count corresponds to approximately 41.6 nanoseconds.
This feature is air-reading compensated for temperature normalization purposes.
Coil A, B1, and B2 Phase
This feature is the time between the phase-detect crossing at the first peak sample acquisition and the last sample acquisition. This feature is used because it gives a very sensitive indication of the magnetic permeability of the coin (which corresponds to the impedance of the tank, or how the coin disturbs the mutual inductance of the opposing coils). It has the broadest distribution of the magnetic features for like coins, but is often useful in providing more separation between dissimilar coins.
Notes: This feature is in units of HC12 timer counts; the timer operates at a bus frequency of 24 MHz, so each count corresponds to approximately 41.6 nanoseconds.
This feature is air-reading compensated for temperature normalization purposes.
Coil A, B1, and B2 Amplitudes
While up to 10 peak amplitudes are collected for every coin, only 2 are used for validation. These 2 are independently selectable per scan, but currently must be successive, i.e., peaks 1 and 2, or peaks 8 and 9, etc. They should be selected for their ability to aid in distinguishing dissimilar coins during tune development.
Notes:
These features are in units of HC12 ATD counts. As it is a 10-bit ATD, each count corresponds to approximately 5 millivolts.
Two peaks are used because it also embeds some characteristic of the different decay rate of the coil signal for dissimilar coins.
Not all 10 peaks are always obtained, especially for ferro-magnetic coins (the fewest ever obtained has been observed to be 2). Typically, only 3 to 5 peaks are obtained for more magnetizable coins.
Coil A, B1, and B2 Tau (User Configurable Features)
These 3 features are placeholders for virtual features. Currently, they are simply the difference between the 2 peaks selected for validation, which characterizes the decay rate of the signal. This feature has been shown to have a much tighter distribution than the peak amplitudes themselves; i.e., when one peak is offset for a like coin during a successive scan, the other peak will maintain a virtually constant ratio with the first peak.
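The notion of a user-configurable virtual feature built from placeholders and operators can be sketched as follows. The encoding (the struct layout and operator set) is an assumption for illustration; the current tau falls out as the subtraction case.

```c
typedef enum { OP_ADD, OP_SUB, OP_MUL, OP_DIV } op_t;

/* An equation represented by placeholders: two feature indices plus an
   operator (illustrative encoding; the actual format is not disclosed). */
typedef struct {
    int  lhs_idx;   /* index of the first feature operand  */
    int  rhs_idx;   /* index of the second feature operand */
    op_t op;        /* operator joining the operands       */
} virtual_feature_t;

long eval_virtual(const virtual_feature_t *vf, const long *features)
{
    long a = features[vf->lhs_idx];
    long b = features[vf->rhs_idx];
    switch (vf->op) {
    case OP_ADD: return a + b;
    case OP_SUB: return a - b;          /* tau: difference of the 2 peaks */
    case OP_MUL: return a * b;
    case OP_DIV: return b ? a / b : 0;  /* guard against divide-by-zero   */
    }
    return 0;
}
```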
After the data is conditioned, it is compared to various nominal feature vectors, some representing valid coins and others invalid slugs. Whichever produces the highest correlation result while also passing its respective minimum correlation score is taken as the pattern match.
The method utilized for performing pattern recognition in this application is a novel weighted-error correlation algorithm. This algorithm was developed as a direct result of researching various pattern recognition methodologies, including various statistical data classification algorithms as well as BMP and SOFM ANNs.
Weighted Error Correlation
The significance of the correlation coefficient is that it indicates how well two data vectors follow the same trend, effectively comparing least sum-of-squares regression line slopes via a product of moments. In the task of coin validation, the data vectors being correlated are the nominal coin data versus the collected coin data. A coefficient of 1 indicates that the correlated vectors have parallel regression lines. A coefficient of 0 indicates that the vectors are uncorrelated, and a coefficient of −1 indicates that the vectors are perfectly negatively correlated; i.e., their regression lines have opposite slopes. The algorithm for calculating the two-dimensional Pearson's Correlation Coefficient is as follows:

r = (N*ΣXiYi − ΣXi*ΣYi) / sqrt[(N*ΣXi² − (ΣXi)²) * (N*ΣYi² − (ΣYi)²)]
Where:
r is the correlation coefficient, which ranges from −1 to 1,
N is the number of data points (samples) being correlated,
X and Y are N-dimensional data arrays.
The correlation coefficient has some analytical deficiencies denoted by the following:
These are issues inherent in the correlation coefficient calculation, but due to the nontrivial nature of the data being analyzed in this application, they are non-problematic.
A desirable feature of the correlation coefficient is that the trend of the data (that is, their respective ratios) is as important as the data itself. E.g., if two data vectors are separated by a constant offset but follow an identical trend, then the correlation coefficient would still indicate that those vectors are identical. This also holds true for the weighted-error algorithm when utilizing identical weights for all the features.
The equation for a prior weighted correlation coefficient algorithm, presented for the purpose of contrasting with the weighted-error correlation coefficient algorithm, is the standard weighted form:

rw = ΣWi(Xi − Xm)(Yi − Ym) / sqrt[ΣWi(Xi − Xm)² * ΣWi(Yi − Ym)²]

in which Xm = ΣWiXi/ΣWi and Ym = ΣWiYi/ΣWi are the weighted means.
Where:
W is an N-dimensional data array.
The algorithm for the weighted error correlation coefficient is as follows:
wi = (Xi − Yi) * Wi
xi = Xi + wi
yi = Yi − wi

The coefficient is then obtained by applying the Pearson's correlation formula above to the transformed vectors x and y.
Stated in words, the difference between the original algorithm and the weighted-error algorithm is that each point error (the difference between each X and Y data pair) is symmetrically added to and subtracted from the original data pair to scale their divergence based on the weighting. Scaling both the X and Y vectors is done for the sake of symmetry and efficiency using integer math; an identical effect could be obtained by scaling one vector by twice as much, or a similar effect obtained by scaling just one vector by the error times the weight.
Thus, for a weight array of all 0's, the weighted error correlation corresponds exactly to the original Pearson's correlation coefficient calculation. Nonzero weights magnify the separation between the data pair at the corresponding index, thus conferring greater impact on the correlation result. Once weights are utilized, the correlation coefficient is no longer an indication of similarity, orthogonality, or independence, but strictly an indicator of data vector trend/sample similarity. It then becomes a scoring method that not only reflects data interdependency, but also takes data trending into account, which is synonymous with pattern recognition. Note that the weights are virtually independent; i.e., modifying a weight does not significantly affect the correlation results of the other data points with respect to their weights, so changes in coefficient results are more additive in nature than with typical weighted correlation algorithms. The results are not purely additive because the coefficient result follows a hyperbolic-tangent-like curve, bounded between −1 and 1, but the linear region still yields much potential for superposition of cumulative error. If the weights are kept at the same value for all the samples, similarly trending vectors still possess high correlation. Another significant aspect of the weights is that as a weight increases for a particular data point, less deviation from the nominal trend is "tolerated" at that point. Weight values of note are as follows:
This method is dissimilar to any existing weighted correlation algorithm; it was developed to produce superior results when considering ease of implementation, restriction to integer math (due to being ultimately coded for an embedded target), and ability to recognize data trends while still giving separation due to gross errors.
To give some illustrative examples, given a data vector X={0, 100, 200, 300, 400, 500}, a data vector Y={2, 128, 204, 302, 421, 501}, and a weight vector W={5, 5, 5, 5, 5, 5}, the Pearson's correlation coefficient is equal to 0.998 (note the weight vector is meaningless for this calculation), and the weighted-error correlation coefficient is equal to 0.787. Changing the Y vector to Y={100, 200, 300, 400, 500, 600} yields 1 and 1, respectively, and changing the Y vector to Y={−5, 133, 205, 332, 439, 468} yields 0.989 and 0.193, respectively.
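The calculation can be verified with a short C sketch that implements the transformation given above and then applies Pearson's formula. Floating point is used here for clarity, whereas the embedded implementation is described as restricted to integer math; running the sketch on the first example vectors reproduces the quoted 0.998 and 0.787.

```c
#include <stdio.h>
#include <math.h>

/* Pearson's correlation coefficient over two N-point vectors. */
static double pearson(const double *x, const double *y, int n)
{
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i];  sy += y[i];
        sxx += x[i] * x[i];  syy += y[i] * y[i];
        sxy += x[i] * y[i];
    }
    double den = sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
    return den == 0 ? 0 : (n * sxy - sx * sy) / den;
}

/* Weighted-error correlation: each point error is scaled by its weight,
   then symmetrically added to X and subtracted from Y before correlating. */
static double wec(const double *X, const double *Y, const double *W, int n)
{
    double x[64], y[64];                  /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++) {
        double w = (X[i] - Y[i]) * W[i];  /* wi = (Xi - Yi) * Wi */
        x[i] = X[i] + w;                  /* xi = Xi + wi        */
        y[i] = Y[i] - w;                  /* yi = Yi - wi        */
    }
    return pearson(x, y, n);
}

int main(void)
{
    double X[] = {0, 100, 200, 300, 400, 500};
    double Y[] = {2, 128, 204, 302, 421, 501};
    double W[] = {5, 5, 5, 5, 5, 5};
    printf("Pearson: %.3f\n", pearson(X, Y, 6));  /* prints 0.998 */
    printf("WEC:     %.3f\n", wec(X, Y, W, 6));   /* prints 0.787 */
    return 0;
}
```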
Pattern Recognition Algorithm Selection Explication
For the present invention, the pattern recognition tool chosen was weighted-error correlation. This is due to the following reasons:
There are a host of other reasons, but these are by far the most important. SOFM would be a fine validation method under the classical validation methodology, but one of its main drawbacks is that it tries to make an exact science of an art form, which is not without consequences in a discipline where validating coins and rejecting slugs demands flexibility, simplicity, and adaptability. In any case, the numerical solution is only as good as the information obtained from the sensors.
Continuous Scanning Validation
Continuous scanning places some strict hardware requirements on the operation of the magnetic sensor circuitry. In order to perform continuous scanning, the frequencies being used must be high enough to allow sufficient oversampling to occur within the validation window. The electronics must also perform several main tasks in this design within certain bandwidth limitations. The magnetic sensors consist of a pair of inductively coupled wound coils, with separate windings, that provide the inductive portion of two separate tank circuits sharing the same wound inductor. One tank possesses a natural frequency of 64 kHz, and the other resonates at a natural frequency of 200 kHz; thus all the integrated circuits comprising the electronics must accommodate this bandwidth. The coils are also oriented to be magnetically opposing. This configuration aids in detecting a change in the coin gap, since the flux coupling between the coils will vary with a different air gap between them, as opposed to a single uncoupled coil configuration.
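For reference, the natural frequency of each tank follows the familiar relation f0 = 1/(2*pi*sqrt(L*C)); the short sketch below solves it for the tank capacitance at the two stated frequencies. The 1 mH inductance is purely an assumed value for illustration.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double L = 1.0e-3;                /* assumed coil inductance: 1 mH */
    double f[2] = {64.0e3, 200.0e3};  /* the two natural frequencies   */

    for (int i = 0; i < 2; i++) {
        /* from f0 = 1/(2*pi*sqrt(L*C)):  C = 1/((2*pi*f0)^2 * L) */
        double C = 1.0 / (4.0 * PI * PI * f[i] * f[i] * L);
        printf("f0 = %6.0f Hz -> C = %.2f nF\n", f[i], C * 1e9);
    }
    return 0;
}
```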
The tank circuit is activated by charging the tank capacitor, and then discharging it through the inductors and resistor. One crucial task is determining an optimal tank circuit charging time, such that unnecessary delay is eliminated and maximal stability is achieved.
As a coin passes between the coils, it influences the flux linkage based on the natural frequency of the tank circuit and the impedance of the coin itself. The higher the resonant frequency of the tank, the less deeply the imparted flux typically penetrates the material of the coin. Thus high frequencies impart information as to the magnetic/electrical properties of the coin's surface material, while low frequencies give a more bulk material reading.
To digitize the frequency and amplitude response of the tank circuit, some additional circuitry is required beyond the native capabilities of the microcontroller. In order to obtain the frequency shifts of the 200 and 64 kHz signals and also synchronize sampling of the peaks of the 64 kHz signal, phase-detect circuits are used. Each is comprised of a comparator with its negative input set to a low-pass-filter reference (whose input is the coil signal) and its positive input connected to the coil signal directly, with approximately 50 millivolts of hysteresis across the references to eliminate glitches due to signal noise. As a general rule, sampling the peaks of a sinusoidal waveform directly with a 10-bit analog-to-digital converter (ATD) is possible with reasonable accuracy as long as the ATD sampling capacitor charge time is one-eighth or less of the period of the signal. In this application, the ATD clock is 2 MHz, and the 9S12 takes 2 ATD clocks to charge the sampling capacitor, which corresponds to a sampling time of 1 microsecond (1 MHz). This is more than adequate to sample the peaks of the 64 kHz signal.
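The one-eighth-period rule can be checked directly against the numbers given in the text:

```c
#include <stdio.h>

int main(void)
{
    double atd_clock_hz  = 2.0e6;   /* ATD clock, per the text            */
    double charge_clocks = 2.0;     /* 9S12 sample-capacitor charge time  */
    double signal_hz     = 64.0e3;  /* low-frequency tank signal          */

    double charge_s = charge_clocks / atd_clock_hz;  /* = 1.00 us */
    double limit_s  = (1.0 / signal_hz) / 8.0;       /* = 1.95 us */
    printf("charge %.2f us vs. limit %.2f us -> %s\n",
           charge_s * 1e6, limit_s * 1e6,
           charge_s <= limit_s ? "adequate" : "too slow");
    return 0;
}
```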
Software Explication—Continuous Scanning Coin Validation
When the coin breaks the first optic, continuous scanning is initiated at the two frequencies of interest, with each successive scan alternating between the two frequencies. During scanning, 3 features are obtained: the high frequency signal period, and the low frequency signal period and amplitude. Each feature is accumulated in a separate data buffer for each scan. Scanning ends either when the second optic becomes blocked while the first optic is already unblocked, or when the first optic becomes unblocked while the second optic is already blocked. Coins smaller than the optic gap result in the first case, and larger coins result in the latter. This data collection cutoff eliminates unnecessarily redundant data collection due to coin symmetry, unless it is desirable to better ascertain the diameter of the coin magnetically. Another beneficial result of this approach is that extra time is gained for performing coin validation, in the event some coin sorting action is required soon after the coin leaves the second optic.
After the data is collected, it undergoes two conditioning steps. First, the three data buffers are decimated (down-sampled) in order to compensate for coin speed variation, which ensures that successive validation data buffers contain samples that correspond to similar coin-position acquisition intervals. Second, the data is normalized, which compensates for hardware/temperature variation in the validation hardware. This can be performed either via air data compensation (the preferred implementation) or via fixed remapping to an arbitrary range (normalization).
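A minimal sketch of the two conditioning steps might look like the following, assuming a fixed output buffer length, nearest-sample decimation, and air-reading compensation via an integer-scaled ratio; the names, the buffer length, and the x1024 scaling are all illustrative.

```c
#define VALIDATION_SAMPLES 32   /* assumed fixed post-decimation length */

/* raw[0..raw_len-1]: one feature's capture buffer for this coin pass
   (raw_len >= VALIDATION_SAMPLES). air[0..VALIDATION_SAMPLES-1]: stored
   air readings (nonzero), already at the decimated length.             */
void condition(const int *raw, int raw_len, const int *air, int *out)
{
    for (int i = 0; i < VALIDATION_SAMPLES; i++) {
        /* decimation: map each output index to a raw sample so that
           buffers always cover similar coin-position intervals */
        int j = (int)((long)i * raw_len / VALIDATION_SAMPLES);
        /* air compensation: ratio to air, scaled for integer math */
        out[i] = (int)(((long)raw[j] * 1024L) / air[i]);
    }
}
```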
It has been satisfactorily demonstrated that the tank circuit response for a given coin with respect to air readings for a given unit maintains a constant ratio across a wide temperature range (0 to 150° F.), and only fails in temperatures where component thermal ratings are exceeded. It is further postulated that normalization will compensate for unit hardware variation in tank circuit response.
After the data is conditioned, it is compared to numerous sets of nominal feature vectors, with 3 feature vectors per set, some representing valid coins and others possibly invalid slugs. Whichever set produces the highest correlation result while also passing its respective minimum score is taken as the pattern match.
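The set-matching step can be sketched as follows, reusing the wec() routine from the earlier example. The tune structure, the averaging of the 3 per-buffer scores into one result, and the -1 reject code are assumptions for illustration.

```c
double wec(const double *X, const double *Y, const double *W, int n); /* above */

typedef struct {
    const double *nominal[3];  /* 3 nominal feature vectors per set */
    const double *weights[3];  /* matching weight vectors           */
    double        min_score;   /* minimum passing correlation score */
    int           coin_id;     /* denomination this set represents  */
} tune_t;

/* Returns the coin_id of the best passing match, or -1 to reject. */
int classify(const double *buf[3], const tune_t *tunes, int num_tunes, int n)
{
    int best = -1;
    double best_score = -1.0;
    for (int t = 0; t < num_tunes; t++) {
        double score = 0.0;
        for (int k = 0; k < 3; k++)   /* combine the 3 buffer scores */
            score += wec(tunes[t].nominal[k], buf[k], tunes[t].weights[k], n);
        score /= 3.0;
        if (score >= tunes[t].min_score && score > best_score) {
            best_score = score;       /* highest passing result wins */
            best = tunes[t].coin_id;
        }
    }
    return best;
}
```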
Software Explication—Coil Calibration and Coin Tuning
To perform coil calibration, it is first necessary to understand the nature of the coil response, which is an exponentially decaying sinusoid. In order to qualitatively ascertain the full nature of how a coin affects this signal, it is necessary to capture both the change in the amplitude envelope and the frequency response. This is accomplished via the phase-detect circuitry, which also aids in synchronizing ATD samples to coincide with the signal peaks. When the phase shift and peak amplitudes are captured, the original signal can be reconstructed in its entirety. For the purpose of coil calibration, all that is required is to reconstruct the decay envelope of the sinusoid, which is represented by the following function:

y = A*e^(−x/B) + C
where:
x is the sample acquisition interval.
y is the resultant amplitude.
A is the amplitude envelope coefficient, which is indicative of the minimum-to-maximum amplitude delta.
B is the decay rate coefficient (which is inverted for convenience). This is indicative of the time it takes for the signal to approach its limit.
C is the amplitude offset coefficient, which denotes the DC level of the signal.
Calibration is performed by characterizing the captured coil signals at various points of interest (i.e., reference “keys”) for the purposes of modeling the entire response range of the coils. These reference points are preferably selected to be near the extreme ends and center of the response range. Characterization is performed using iterative curve-fitting, which finds the A, B, and C parameters that result in the target signal at each reference point. Once the parameters are found, an additional curve fitting process is performed upon the parameters separately to model the curves for each parameter. Thus, the response for each coin lies somewhere on these independent parameter curves.
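One simple way to realize the characterization step is shown below: once the C (DC offset) parameter is known, y = A*e^(−x/B) + C linearizes as ln(y − C) = ln A − x/B, so an ordinary least-squares line fit recovers A and B. This is a simplification of the iterative curve fitting described, offered only as an illustrative sketch; it assumes C is already known and every sample satisfies y > C.

```c
#include <math.h>
#include <stdio.h>

/* Fit A and B of y = A*exp(-x/B) + C by linear regression on ln(y - C),
   with x taken as the sample index (the acquisition interval). */
void fit_envelope(const double *y, int n, double C, double *A, double *B)
{
    double sx = 0, sz = 0, sxx = 0, sxz = 0;
    for (int i = 0; i < n; i++) {
        double z = log(y[i] - C);            /* linearized sample */
        sx += i;  sz += z;
        sxx += (double)i * i;  sxz += i * z;
    }
    double slope = (n * sxz - sx * sz) / (n * sxx - sx * sx);
    double icept = (sz - slope * sx) / n;
    *A = exp(icept);     /* amplitude envelope coefficient    */
    *B = -1.0 / slope;   /* decay rate coefficient (inverted) */
}

int main(void)
{
    double y[10], A, B;
    for (int i = 0; i < 10; i++)             /* synthetic: A=2, B=4, C=0.5 */
        y[i] = 2.0 * exp(-i / 4.0) + 0.5;
    fit_envelope(y, 10, 0.5, &A, &B);
    printf("A = %.3f, B = %.3f\n", A, B);    /* recovers 2.000, 4.000 */
    return 0;
}
```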
If a sensor response is linear, then only 2 references are required in order to model the entire range. In this case, the coil response is obviously nonlinear, but as is apparent from the above equation, it is easily modeled using just 3 coefficients and the signal frequency. What further simplifies the process is the fact that the DC offset coefficient (aka “C” parameter) remains constant for the entire response range for a given unit and ambient temperature. Thus once the C parameter is obtained, only the subsequent A, B and frequency reference parameters vary.
After the response range is characterized, the coil response for a given coin is captured and characterized. Then the ratio of the coin parameters to the reference points is used to interpolate the coil response for any characterized unit, assuming the ratio can be extrapolated from historical tabulated characterization results.
ANN—artificial neural network. Neural networks are programs that perform pattern recognition after a training process that utilizes various statistical numerical analysis techniques.
BMP—back-propagation multilayer perceptron, a supervised-learning ANN that must be provided the output in order to map the inputs. It is typified by randomly adjusting the "neuron" weights and then iteratively checking for a reduction in the squared error between the calculated and actual outputs. Increasing numbers of neurons are utilized in order to perform more and more complex classification tasks.
cluster—a grouping of features that have been “perceived” via statistical or neural analysis to possess relatively high dependency for use in pattern recognition/rejection. Feature clusters can also be identified using covariance and/or cross-correlation between desirable and undesirable feature databases.
feature—in the field of statistical and neural pattern recognition, a feature is data that represents a one-dimensional object (typically the numerical output of a sensor) used as an input for pattern recognition, often in conjunction with other features. The same feature may also be accumulated to provide multidimensionality for the purpose of pattern recognition, usually over time.
key—for the purposes of calibration, an object used to provide a reference characteristic. In coin acceptor magnetic sensor calibration, this is often either a coin that produces a desired response mounted in an appropriate fixture, or a metallic strip that is inherently a fixture, or even the “natural” response when at rest.
neuron—in many neural network methodologies, the number of neurons corresponds to the number of input and output weights.
SOFM—self-organizing feature map, an unsupervised-learning ANN that uses data clustering algorithms to map high-dimensional data vectors to a lower-dimensional feature space. SOFMs are completely dissimilar to other neural network implementations such as BMPs, and do not utilize "neurons".
tune—a collection of nominal coin feature values and validation parameters used as the basis for coin identification, obtained through rigorous data collection and analysis.
weight—a value that is used to define feature dependence or relevance in pattern recognition.
validation window—the absolute maximum time that can elapse during data collection and classification.
WEC—Weighted Error Correlation.
The foregoing description of the preferred embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention not be limited by the details of the embodiments presented in this description. The above specification, examples, and data provide a complete description of the manufacture and use of the invention. Many embodiments of the invention can be made without departing from the spirit and scope of the invention.
The present invention claims priority to U.S. Provisional Patent Application No. 60/862,351, filed Oct. 20, 2006. The contents of said application are incorporated herein by reference.