The invention relates to counterfeit detection, embedded signaling in printed matter for authentication, and digital watermarking.
Advances in digital imaging and printing technologies have vastly improved desktop publishing, yet have also provided counterfeiters with lower cost tools for counterfeiting security and value documents, such as identity documents, banknotes, and checks. While there are many technologies that make counterfeiting more difficult, there is a need for technologies that can quickly and accurately detect copies. Preferably, these technologies should integrate with existing processes for handling the value documents. For example, in the case of value documents like checks, there is a need for copy detection technology that integrates within the standard printing and validation processes in place today. Further, as paper checks are increasingly being scanned and processed in the digital realm, anti-counterfeiting technologies need to move into this realm as well.
One promising technology for automated copy detection is digital watermarking. Digital watermarking is a process for modifying physical or electronic media to embed a hidden machine-readable code into the media. The media may be modified such that the embedded code is imperceptible or nearly imperceptible to the user, yet may be detected through an automated detection process. Most commonly, digital watermarking is applied to media signals such as images, audio signals, and video signals. However, it may also be applied to other types of media objects, including documents (e.g., through line, word or character shifting), software, multi-dimensional graphics models, and surface textures of objects.
In the case of value documents, digital watermarking can be applied to printed objects for copy detection. In some applications, the digital watermarking techniques can be generalized to auxiliary data embedding methods used to create designed graphics, features or background patterns on value documents that carry auxiliary data. These more general data embedding methods create printable image features that carry auxiliary data covertly, yet are not necessarily invisible. They afford the flexibility to create aesthetically pleasing graphics or unobtrusive patterns that carry covert signals used to authenticate the printed object and distinguish copies from originals.
Auxiliary data embedding systems for documents typically have two primary components: an encoder that embeds the auxiliary signal in a host document image, and a decoder that detects and reads the embedded auxiliary signal from a document. The encoder embeds the auxiliary signal by subtly altering the host image to be printed on the document, or by generating an image that carries the auxiliary data. The reading component analyzes a suspect image scanned from the document to detect whether an auxiliary signal is present, and if so, extracts the information carried in it.
Several particular digital watermarking and related auxiliary data embedding techniques have been developed for print media. The reader is presumed to be familiar with the literature in this field. Particular techniques for embedding and detecting imperceptible digital watermarks in media signals are detailed in the assignee's U.S. Pat. Nos. 6,614,914 and 6,122,403, which are hereby incorporated by reference.
This disclosure describes methods for using embedded auxiliary signals in documents for copy detection. The auxiliary signal is formed as an array of elements selected from a set of print structures with properties that change differently in response to copy operations. These changes in properties of the print structures that carry the embedded auxiliary signal are automatically detectable. For example, the changes make the embedded auxiliary signal more or less detectable. The extent to which the auxiliary data is detected forms a detection metric used in combination with one or more other metrics to differentiate copies from originals. Examples of sets of properties of the print structures that change differently in response to copy operations include sets of colors (including different types of inks), sets of screens or dot structures that have varying dot gain, sets of structures with different aliasing effects, sets of structures with different frequency domain attributes, etc.
Further features will become apparent with reference to the following detailed description and accompanying drawings.
The following sections describe various automated techniques for distinguishing a copy from an original.
Auxiliary Signal Generation, Embedding and Detection
The auxiliary data signal carries a message. This message may include one or more fixed and variable parts. The fixed parts can be used to facilitate detection, avoid false positives, and enable error measurement as an authentication metric of the printed article. The variable parts can carry a variety of information, such as a unique identifier (e.g., serving to index related data in a database), authentication information such as data or feature metrics (or a hash of them) computed on the printed object, and error detection information computed as a function of the other message elements.
The auxiliary signal generator of
Some form of geometric synchronization signal may be formed with the auxiliary signal at this stage or subsequent stages. One example is formation of the signal such that it has detectable registration peaks in a transform domain, such as a spatial frequency domain, convolution domain and/or correlation domain.
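As a rough illustration of this idea (not the specific construction used in the referenced patents), a synchronization component can be built as a sum of a few low-amplitude 2-D sinusoids, each of which produces a detectable peak in the tile's spatial-frequency magnitude spectrum. The tile size and peak frequencies below are arbitrary assumptions.

```python
# Illustrative sketch only: a synchronization signal built from a few 2-D sinusoids,
# each of which yields a detectable peak in the spatial-frequency (FFT magnitude) domain.
import numpy as np

TILE = 128                                             # assumed tile size (embedding locations)
SYNC_FREQS = [(9, 23), (17, 5), (31, 41), (45, 12)]    # hypothetical (u, v) peak frequencies

def make_sync_signal(tile=TILE, freqs=SYNC_FREQS):
    y, x = np.mgrid[0:tile, 0:tile]
    sync = sum(np.cos(2 * np.pi * (u * x + v * y) / tile) for u, v in freqs)
    return sync / len(freqs)                           # low-amplitude, noise-like pattern

# Each sinusoid shows up as a conjugate pair of peaks in the tile's magnitude spectrum:
mag = np.abs(np.fft.fft2(make_sync_signal()))
peak_bins = np.argwhere(mag > 0.25 * mag.max())
```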
As part of the signal generation process, the auxiliary signal generator maps the elements of the signal to spatial locations of a target print object (104). These locations form a tiled pattern of rectangular arrays, such as the small array shown in
At this point, the auxiliary signal comprises an array of binary or multilevel values (i.e., values with more than two states) at each spatial location. For the sake of explanation, we will refer to these locations as embedding locations.
Next, the signal generator selects a print structure for each embedding location (106). One can consider the signal value at the embedding location as an index to the desired print structure. This print structure may be selected from a set of possible print structures. One simple set for the binary state is the presence or absence of an ink dot, line or other shape.
Another example is a first structure at dot pattern or halftone screen 1 and a second structure at dot pattern or screen 2 as shown in
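The selection step can be pictured with a small sketch; the two 4x4 cells below are hypothetical stand-ins for screen 1 and screen 2 (same nominal tone, different dot structure), chosen purely for illustration.

```python
# Illustrative sketch: the auxiliary signal value at each embedding location indexes one
# of two candidate print structures. The two 4x4 cells are hypothetical screens with the
# same nominal tone but different dot structure (clustered vs. dispersed).
import numpy as np

SCREEN_1 = np.array([[0,   0,   255, 255],     # clustered 2x2 dots
                     [0,   0,   255, 255],
                     [255, 255, 0,   0  ],
                     [255, 255, 0,   0  ]], dtype=np.uint8)

SCREEN_2 = np.array([[0,   255, 0,   255],     # dispersed single-pixel dots
                     [255, 0,   255, 0  ],
                     [0,   255, 0,   255],
                     [255, 0,   255, 0  ]], dtype=np.uint8)

def render_tile(aux_bits):
    """aux_bits: 2-D array of 0/1 auxiliary signal values, one per embedding location."""
    h, w = aux_bits.shape
    out = np.zeros((h * 4, w * 4), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            cell = SCREEN_2 if aux_bits[i, j] else SCREEN_1
            out[i*4:(i+1)*4, j*4:(j+1)*4] = cell
    return out                                  # printable bitmap for this tile
```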
More examples include a structure that has aliasing property 1, and a structure that has aliasing property 2. As in the case of different colors or dot gains, the difference in the aliasing property due to copying can alter the embedding location's appearance and either make the embedded signal more or less detectable.
As explained in further detail below, these structures can be selected so that they have measurable values, such as luminance, intensity, or some other characteristic, that diverge or converge in response to a copy operation. These structural features may be combined to make the divergence or convergence more dramatic. In addition, combinations of these features may be used to represent multiple auxiliary signal states at the embedding location.
The example shown in
After selecting the desired print structures for each embedding location, the result is an image that is ready for printing. The print structures may be designed and specified in a format that is compatible with the type of printer used to print the image on a substrate such as paper, plastic, etc. Many printers require that image or other data be formatted into an image compatible for printing on the particular printer in a process called RIP or Raster Image Processing. This RIP transforms the input image into an image comprised of an array of the print structures compatible with the printer hardware. These print structures may include line screens, halftone dots (clustered dots), dither matrices, halftone images created by error diffusion, etc. Our implementation may be integrated with the RIP to create an image formatted for printing that has the desired print structure per embedding location. Alternatively, it may be designed to be ready for printing such that the RIP process is unnecessary or by-passed.
As an alternative to selecting print structures in block 106, the auxiliary signal generator may produce an array of values that specify a change to the print structure of a host image into which the auxiliary signal is to be embedded. For example, the array of values in
In block 108, a printer prints the resulting image on a substrate. This produces a printed document (110). The term “document” generally encompasses a variety of printed objects, including security documents, identity documents, banknotes, checks, packages, etc. or any other type of printed article where copy detection is relevant.
The bottom of
The copy operation is expected to make certain aspects of the printed image change, and the copy detection process of
The authentication process starts with the authentication scan of the printed image (120). The quality of this scan varies with the implementation. In some cases, it is an 8-bit-per-pixel grayscale image at a particular resolution, such as 100 to 300 dpi. In other cases, it is a binary image. The parameters of the auxiliary signal and other copy detect features are designed accordingly.
The process of extracting the auxiliary signal is illustrated in blocks 122 to 128. An auxiliary signal reader begins by detecting the synchronization signal of the auxiliary signal. For example, it detects transform domain peaks of the synchronization signal, and correlates them with a known synchronization pattern to calculate rotation, scale and translation (the origin of the auxiliary signal). Examples of this process are described in application Ser. No. 09/503,881 and U.S. Pat. Nos. 6,122,403 and 6,614,914.
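For illustration only, a brute-force version of this synchronization step might look as follows, assuming the known synchronization pattern is a peak constellation like the sketch given earlier (the cited patents describe far more efficient detectors, and translation/origin recovery is omitted here):

```python
# Illustrative sketch of block 122: search rotation and spectrum scaling for the pose that
# best aligns the known peak constellation with the scanned tile's magnitude spectrum.
import numpy as np

TILE = 128
SYNC_FREQS = [(9, 23), (17, 5), (31, 41), (45, 12)]    # known (assumed) peak frequencies

def sync_search(scan_tile):
    mag = np.abs(np.fft.fft2(scan_tile - scan_tile.mean()))
    best_score, best_pose = -1.0, None
    for rot in np.deg2rad(np.arange(0.0, 360.0, 0.5)):
        for scale in np.arange(0.80, 1.25, 0.01):
            score = 0.0
            for u, v in SYNC_FREQS:
                ur = scale * (u * np.cos(rot) - v * np.sin(rot))   # rotated/scaled peak location
                vr = scale * (u * np.sin(rot) + v * np.cos(rot))
                score += mag[int(round(vr)) % TILE, int(round(ur)) % TILE]
            if score > best_score:
                best_score, best_pose = score, (float(np.rad2deg(rot)), float(scale))
    return best_pose            # estimated (rotation in degrees, relative spectrum scale)
```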
As shown in
The next step in extracting the message from the auxiliary data is estimating the signal elements (124). The reader looks at the synchronized array of image values and estimates the value of the auxiliary signal at each embedding location. For example, in the case where the auxiliary signal is embedded by adjusting the luminance up or down relative to neighboring locations, the reader predicts the value of the auxiliary signal element by comparing the luminance of the embedding location of interest with its neighbors.
Next, the reader performs the inverse of the transform with the carrier to get estimates of the error correction encoded elements (126). In the case of a spreading carrier, the reader accumulates the contributions of the auxiliary signal estimates from the N embedding locations to form the estimate of the error correction encoded element. The reader then performs error correction decoding on the resulting signal to extract the embedded message (128). This message can then be used to provide further copy detect metrics, referred to as code metrics in
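A compact sketch of the reading steps in blocks 124 and 126 is shown below, assuming each error-correction coded bit was spread over a contiguous group of embedding locations with a known pseudorandom +/-1 carrier (the grouping and carrier are assumptions; the error correction decoding of block 128 is omitted):

```python
# Illustrative sketch of blocks 124-126: estimate each location's value against its
# neighbors, multiply by the carrier, and accumulate the chips for each coded bit.
import numpy as np

def estimate_elements(img):
    """Block 124: compare each embedding location's luminance to the mean of its neighbors."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return img.astype(float) - neigh       # >0 where luminance was raised, <0 where lowered

def despread(estimates, carrier, n_coded_bits):
    """Block 126: accumulate the N chips contributing to each error-correction coded bit."""
    chips = (estimates * carrier).ravel()                  # carrier: same-shape array of +/-1
    chips = chips[: (len(chips) // n_coded_bits) * n_coded_bits]
    soft = chips.reshape(n_coded_bits, -1).sum(axis=1)     # soft estimate per coded bit
    return (soft > 0).astype(int)                          # hard decisions for block 128
```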
In addition to the metrics based on the embedded auxiliary signal, the reader also computes metrics based on other features on the printed object (130). Some examples include analysis of aliasing of certain structures, frequency domain analysis of certain structures that change in a predictable way in response to a copy operation, analysis of fonts on the printed object to detect changes in the fonts due to copying or swapping operations, etc. All these metrics are input to a classifier 132 that determines whether the metrics, when taken as a whole, map to a region corresponding to a copy or to a region corresponding to an original.
One form of classifier is a Bayesian classifier that is formulated based on a training set of copies and originals. This training set includes a diversity of known originals and copies that enables the regions to be defined based on a clustering of the metrics for the originals (the region in metric space representing originals) and the metrics for the copies (the region in metric space representing copies). The training process computes the metrics for each copy and original and maps them into the multi-dimensional metric space. It then forms regions for copies and originals around the clustering of the metrics for copies and originals, respectively.
In operation, the classifier maps the metrics measured from a document whose status is unknown. Based on the region into which these metrics map, the classifier classifies the document as a copy or an original. More regions can be created in the metric space if further document differentiation is desired.
Having described the entire system, we now describe a number of specific types of print structures that can be used to embed the auxiliary signal, or that can be used independently to create copy detect features.
One such process is to embed a digital watermark into a block of midlevel gray values, threshold the result to binary values per embedding location, and then insert the desired print structure and property (e.g., line structure, screen, color, etc.) per embedding location based on the auxiliary signal value at that location.
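A minimal sketch of this sequence, for a generic additive watermark signal, is shown below; the gray level and threshold are assumptions.

```python
# Illustrative sketch: embed an additive watermark signal into a block of midlevel gray
# values, threshold each embedding location to a binary state, and use that state as the
# index of the print structure (line structure, screen, color, etc.) inserted there.
import numpy as np

def watermark_to_structure_indices(wm_signal, mid_gray=128.0):
    """wm_signal: 2-D array of watermark values, one per embedding location."""
    gray = mid_gray + wm_signal                   # block of midlevel gray values plus watermark
    return (gray >= mid_gray).astype(np.uint8)    # 0/1 per location; drives structure selection
```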
In the case of a line continuity method of
As noted, the differences in the way these print structures respond to copy operations make the embedded digital watermark more readable or less readable. We use the term “nascent watermark” for a digital watermark that becomes more detectable after a copy operation, and the term “fragile watermark” for a watermark that becomes less detectable after a copy operation. While the varying responses of the print structures are a useful tool for constructing an embedded machine-readable signal, such as a nascent or fragile watermark, they can also be used as measurable feature metrics apart from the embedded signal. For example, they can be used as separate print features that are measured in block 130 of
In the next sections, we will discuss a number of print structures generally. They may be used as embedded signal elements and as independent print features.
Colors
As noted above with reference to
One way to use colors for copy detection is to select out-of-gamut inks for use in one or more of the print structures. A color gamut defines a range of colors. Different color schemes (e.g., RGB and CMY) generally have distinct color gamuts. Such color schemes typically have overlapping color ranges as well as unique (out of gamut) color ranges.
Inks or dyes can be selected that lie outside of a color gamut of a capture device (e.g., an RGB scanner) used in typical copying operations, yet fall within the gamut of the authentication scanner (e.g., a panchromatic scanner). Consider a document that is printed with some dark blues and violets in the CMYK space, which are out of gamut for the RGB space. When a scanner scans the CMYK document, it typically scans the image in the RGB space. The RGB scanning loses the dark blues and violets in the conversion.
This approach extends to color gamuts of printers used in counterfeiting as well. Inks can be selected that fall outside the typical gamut of CMYK printers likely used by counterfeiters.
Another approach is to use a metameric ink for one or more of the print structures. These inks look different to different types of scanners and/or lighting conditions, and therefore, lead to detectable differences in the scanner output. The differences between the authentication scanner output and the counterfeiter scanner output provide detectable differences between copies and originals. Thus, these inks are candidates for use in the print structures.
Another approach is to mix different amounts of black in one of a pair of colors. An authentication scanner that has better sensitivity to these differences will represent the differences in colors more accurately than a typical RGB scanner. The change in the luminance difference between these two colors in response to a copying operation provides another copy detection feature for the print structures.
Another approach is to use a pair of colors where one color is persistent in black and white image scans, while the other is not. Again, the change in the differences between these colors in response to a copying operation provides another copy detection feature for print structures.
Dot Gain
Above with reference to
The prior example described how the difference in dot gain of two print structures due to copying corresponds to measurable differences in luminance of the two structures. This difference in luminance can be used to make a nascent or fragile embedded signal carried in the luminance difference value. It can also be applied to other print structures as well.
In grayscale images, this shift in luminance causes a shift in the gray level values. In binary images, the gray levels are thresholded to black and white. As such, some binary pixels that were once thresholded to white, now are thresholded to black. This shift can be measured using a strip of a gradient with varying grayscale values. The point at which this gradient is thresholded to black shifts due to the dot gain.
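As an illustration (assuming a printed gradient strip running from light to dark is available in the scan), the crossing point can be located and compared against a reference measurement:

```python
# Illustrative sketch: dot gain darkens a printed gradient strip, so the point at which
# the strip first thresholds to black shifts toward the lighter end in a copy.
import numpy as np

def threshold_crossing(strip, black_threshold=128):
    """strip: 1-D array of scanned gray values along the gradient, ordered light to dark."""
    below = np.nonzero(np.asarray(strip, dtype=float) < black_threshold)[0]
    return int(below[0]) if below.size else len(strip)

def dot_gain_shift(scanned_strip, reference_strip):
    """Positive shift indicates the suspect strip goes black earlier than the reference."""
    return threshold_crossing(reference_strip) - threshold_crossing(scanned_strip)
```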
A number of different structures can be constructed to detect the shift in luminance. These include:
A first example technique exploits the differences in dot gain in original vs. copied documents. The original document is printed with areas at a first screen, and a second screen. Due to the fact that dot gain has a greater effect on dots at one screen than the other, the differences in dot gain of these areas distinguish between originals and copies. These differences in dot gain can be used to carry a hidden signal that becomes machine readable in copies.
A variation on this technique includes using an area of a solid tone, and an area of a gradient tone. Again, if these areas are printed using different screens with different sensitivities to dot gain, the dot gain caused in the copy operation impacts these areas differently.
The areas of the first and second screens can be randomly distributed throughout the document according to a secret key that specifies the spatial mapping of the screen types to document locations.
Aliasing Effects
The aliasing of certain print structures provides a copy detection feature for print structures. Typical printers used in counterfeiting cannot accurately reproduce certain types of printed structures because these structures are difficult to screen or dither. One such structure is a curved line, or sets of curved lines (e.g., concentric circles).
These curved structures are difficult to screen or dither at different angles. As a result, the lines are smeared in a counterfeit document. This effect is observable in the frequency domain. Preferably, these line structures are tightly spaced (e.g., have substantial high frequency content).
In some cases, one must implement the copy detect reader for images that are scanned at a relatively low resolution. Given this constraint, the high frequency content is likely to alias in both the original and copy. However, the copy will not have as much energy as the original in areas of the frequency domain where the aliasing occurs. This can be detected using the technique illustrated in
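A simple way to picture this measurement (a sketch under assumed band limits, not the technique of the referenced figure) is to compare the energy in an aliasing band of the scanned patch's spectrum against the energy measured for known originals:

```python
# Illustrative sketch: compare the energy in an assumed high-frequency/aliasing band of a
# scanned patch's spectrum against the energy measured for known originals; copies are
# expected to show noticeably less energy in that band.
import numpy as np

def band_energy(patch, f_lo=0.25, f_hi=0.45):
    """Energy in an annular spatial-frequency band (frequencies as fractions of sampling rate)."""
    patch = np.asarray(patch, dtype=float)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    fy = np.fft.fftshift(np.fft.fftfreq(patch.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(patch.shape[1]))[None, :]
    r = np.hypot(fy, fx)
    return float((mag[(r >= f_lo) & (r <= f_hi)] ** 2).sum())

def aliasing_metric(suspect_patch, original_energy):
    return band_energy(suspect_patch) / original_energy    # ratios well below 1 suggest a copy
```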
As noted,
These line structures can be combined with embedded signaling methods (such as nascent or fragile watermarks) by using line continuity modulation to embed an auxiliary signal in the line structure. The embedding locations can be represented as the presence vs. absence of a line, or a line of one color vs. a line of a different color per above to represent the states of the embedded signal.
Another use of the embedded signal is to use the geometric synchronization that it provides to locate and align the print structure used to measure aliasing effects.
In the case where the embedded signal is carried in line structures, frequency metrics can be used to measure energy peaks and/or aliasing effects that occur on the line structure and to differentiate copies from originals. The horizontal line structures of
Frequency Domain Features
In addition to measuring aliasing effects in the frequency domain, other structures such as line structures or shapes can be measured in the frequency domain. Further, these structures can be combined with the color or dot gain effects to derive copy detection metrics.
One example combines line structures with pairs of colors. One line structure has a first color and translates into a detectable frequency domain feature such as a peak or set of peaks. A second line structure has a second color and translates into a different frequency domain feature such as a different peak or set of peaks. The shift in the color differences can be measured in the frequency domain by measuring the relationship between these peaks.
Another example is to use a feature like the synchronization signal of the embedded signal that forms a set of peaks at known locations in the frequency magnitude domain. Two or more of these signals can be embedded using two or more different colors. The peaks of each are then detected and compared relative to each other to detect the shift in color that occurs during a copy operation.
Compression Effects
If the scanned image for authentication is expected to be compressed, certain print features can be designed that have varying susceptibility to compression, such as JPEG compression. Differences in the way copies and originals respond to compression can then be detected and used as another metric.
Additional Embedded Signal Metrics
As an additional enhancement, the print structures that form one of the states of the embedded signal (e.g., the 0 state in the embedding locations shown in
Other Feature Analysis
Additional print features such as fonts can be used to detect copies. Special fonts may be used that are difficult to reproduce. The changes in the fonts can be measured and used as additional print feature metrics.
In addition, fraudulent entries can be detected by comparing differences in font types on different parts of the document. For checks and other financial instruments, certain hot spots are more likely to be changed than other static areas. These include areas like the text representing the name, amount, and date on the check.
System Design
In this section, we describe a system design that uses digital watermarking and other security features to analyze the authenticity of printed objects based on images scanned from a suspect printed object. In this system, an image printed on the document carries a robust digital watermark, one or more other watermarks (e.g., fragile watermarks and high-payload variable-message watermarks) and other security features. In some applications, these features are printed on the document at different times, such as when blank versions of the document are printed, and later when variable data is printed onto the blank document. To authenticate the document, a reader analyzes a digitized image captured from a suspect document, measures certain authentication metrics, and classifies the document as an original or counterfeit (or type of counterfeit) based on the metrics. This system, in addition, uses an auxiliary data carrier, namely the robust watermark, to convey auxiliary information that assists in classifying the document.
The robust watermark is designed to survive generations of printing and scanning, including lower resolution scans and conversion to a binary black-white image. It carries auxiliary data to set up the classifier. Examples of this classifier data include:
1. an identification of the document type, which may implicitly convey the corresponding security features, their corresponding metrics, and thresholds for the metrics (e.g., an index to a document design table);
2. an identification of security features on the document (including, for example, types of inks, gradients, frequency domain features, artwork, etc.);
3. an identification of the location of the security features on the document (e.g., relative to some reference location, such as a visible marking or structure on the document, or some invisible orientation point provided by the robust watermark);
4. an identification of the metrics to be used by the classifier;
5. an identification of the thresholds or other parameters of the metrics used by the classifier. These thresholds are used to distinguish between categories of documents, such as the original and different types of fakes (fakes made by photocopier, by laser printer, by ink jet printer, etc.)
6. an identification of the classifier type (e.g., an index to a look up table of classifier types).
7. an identification of the training data set for the classifier.
In this system, the robust watermark provides geometric synchronization of the image. This synchronization compensates for rotation, spatial scaling, and translation. It provides a reference point from which other features can be located.
The robust watermark may be printed on the front, back or both sides of the document. It may form part of the background image over which other data is printed, such as text. It may be hidden within other images, such as graphic designs, logos or other artwork, printed on the document. It may occupy the entire background, or only portions of the background. In the case of value documents like ID documents, securities, and checks, the robust watermark may provide geometric and other parameter reference information for other watermarks carried in images printed on the document. For example, the robust watermark can provide a geometric reference for a high capacity, variable message carrying watermark that is printed along with the robust watermark or added at a later time.
In some applications, the variable message-carrying watermark carries data about other variable data printed on the document. Since this other data is often printed later over a pre-printed form (e.g., a check with blank payee, amount and date sections), the high capacity, variable message carrying watermark is printed on the document at the time that the variable text information is printed. For example, in the case of checks, the variable information printed on the check such as the amount, payee, and date are added later, and this data or a hash of it is embedded in the high capacity watermark and printed at the same time as the variable information.
The line continuity modulation mark may be used to carry this variable data. To increase its payload capacity, it does not have to include geometric synchronization components. This synchronization information can be carried elsewhere. This mark need only carry the variable message information, possibly error correction coded to improve robustness. The line structure itself can also be used to provide rotation and scale information. The reader deduces the rotation from the direction of the line structures, and the scale from the frequency of the spacing of the line structures. The origin or translation of the data can be found using the robust watermark printed elsewhere on the document, and/or using a start code or known message portion within the watermark payload.
Alternatively, another high capacity mark or digital watermark may be printed in an area left blank on the document surface to carry variable data corresponding to the variable text data printed on the check. For authentication, the two can be compared to check for alteration in the variable data printed on the check vs. that stored in the high capacity mark.
The use of the robust watermark on both sides of the document provides at least two additional counterfeit detection features. First, the presence of the robust watermark on both sides of the document (e.g., check or other value document) verifies its authenticity. Second, the geometric registration between the robust watermarks on the front and back provides another indicator of authenticity. In particular, the geometric reference point as computed through geometric synchronization of the robust watermark on each side has to be within a desired threshold to verify authenticity.
The robust watermark may also provide an additional metric to detect alterations on certain hot spots on the document. For example, consider a check in which the robust watermark is tiled repeatedly in blocks across the entire check. When variable data is printed on the check at certain hot spots, such as payee name, amount and date, this printing will reduce detection of the robust watermark in the hot spots. Fraudulent alteration of these hot spots (e.g., to change the amount) creates a further reduction in the measurement of the watermark in these hot spots. The reader can be designed to measure the robust watermark signal in these areas as a measure of check alteration.
The robust watermark may also carry additional variable information about the document, such as printer, issuer, document ID, batch ID, etc. For checks or securities, this watermark may also carry an amount indicating the maximum allowable value or amount. The reader at a point of sale location, sorter or other check clearing location deems invalid any value amounts over this maximum amount at the time the document is presented for payment or transfer. Also, the robust watermark may carry a job number, printer, or date. This data may be used to identify checks that are out of place or too old, and thus, invalid or flagged for further analysis.
The robust watermark may be included in design artwork, etc. that is printed with inks having special properties. One example is thermal inks that require the document to be adjusted in temperature to read the robust watermark. Inability to read the watermark at a particular temperature can be an added indicator of validity.
In the top of
Having selected the appropriate feature set, parameters and training data, the reader performs the feature analysis to measure features at the appropriate locations on the scanned image, and passes these measures to the classifier. The classifier adapts its operation to the types of metrics and training set selected for the document. It then classifies the document as an original or counterfeit (or specific type of copy).
As noted above, this system provides a number of enhanced capabilities. First, the classifier can be updated or changed over time. For example, classification data (including feature measurements and corresponding classification results) can be logged separately for each document type, and the training data can be updated to include this data for classifications that are deemed to be suitably reliable. As the population of reliable training data grows, the classifier becomes more effective and adapts over time. The training data can be based on the most recent results and/or the most reliable results for each document type. For example, the training data can be constantly updated to be based on the last 1000 reliable detections for each document type.
The operating characteristics of the scanner can also be tracked over time, which enables the classifier to be adapted based on changes in the performance of the scanner. This enables the classifier to compensate for scanner decay over time. As new data is generated, the training data can be updated so that it is current for the most current scanner performance and the most recent data on originals and counterfeits.
To reduce the false alarm rate, the training data should preferably be updated only with reliable classification data. In addition, documents that are mis-classified, yet later detected in the system, can have their corresponding classification data removed from the training set, or updated to correct the system.
Certain documents whose metrics do not map clearly into original or counterfeit classifications can be logged in the system and flagged for follow up. This follow up analysis may lead to another document category being created in the classifier, or a refinement in the selection of metrics for each document type that enable more effective differentiation between originals and counterfeits. In addition, documents flagged for follow-up can be routed to further authentication processes, such as a scan with a higher resolution scanner to check security features that are detectable only at high resolution scans.
Over time, the classification results can be further analyzed and sub-divided into sub-categories indicating the type of counterfeit. For example, counterfeit types can be classified based on the type of printer used to counterfeit the document. Also, the set of features and parameters used to analyze these document types may be adapted over time. For example, it may be learned that a particular metric will provide more effective differentiation, so the reader may be adapted to analyze that metric and use a classifier that is dependent upon that metric.
As indicated in the diagram, training data may be kept specific to the local device and/or shared among all devices (e.g., such as a group of point of sale scanners, or a group of sorter scanners). Grouping the data from all scanners creates a larger population from which to adapt and refine the classifiers for each document type. This is particularly true for different types of checks from different issuers and printers.
More on Classifiers for Counterfeit Detection
The counterfeit detection problem comes down to making a reliable, automated decision about the authenticity of a particular document. It may be possible to make such a judgment from the detected performance of a single metric. Typically such a metric will be associated with changes to a single physical parameter introduced by the counterfeiting process, and thus may be vulnerable to counterfeits produced by other techniques. Alternatively, it is frequently seen that a single detection metric does not individually meet the performance requirements of the system, whereas the combination of two or more metrics results in a completely satisfactory system. An example of such a situation is shown in
In
We refer to the use of automated methods to divide a multidimensional population into two or more classes as classifier technology. When the classifier is used to separate a population into only two classes, the results may be used as a detector for those classes. The task of the classifier is to learn the behavior of metrics from available data samples from each class (originals and counterfeits) and then use this knowledge to classify an unknown new document as original or counterfeit. Below, we describe different types of classifiers. A probabilistic or Bayesian classifier has been implemented, as has a non-parametric classifier called the k Nearest Neighbor (kNN).
Bayesian Classifier
This classifier is based on a simple Bayesian method. It calculates a statistical “distance” from a test object to the center of a probability cluster, in order to approximate the likelihood that the object belongs to that cluster. To classify a test object, the classifier finds which cluster the object is closest to, and assigns the object to that class. In the current design, three classes (original, xerographic, and ink jet print) are being used. If an object is classed as xerographic or print, then it is reported as “copy” to the user.
In order to realize the method above, we need a way to calculate a statistical distance between a class cluster and the test object. This method uses a simplified distance, based on the mean and standard deviation for each class. When using N metrics, each class is represented by a vector of means (M1, M2, M3, . . . MN) and a vector of standard deviations (S1, S2, S3, . . . SN). The distance measures how far the test object's metric values lie from the class means, in units of the class standard deviations.
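One distance of this kind, given here as an illustrative sketch rather than the exact expression used, is the variance-normalized squared distance

$$ D(x) = \sum_{i=1}^{N} \left( \frac{x_i - M_i}{S_i} \right)^{2}, $$

where x_i is the value of the i-th metric measured from the test object. The short sketch below applies such a distance to the three-class decision described above; the class naming and the form of the class statistics are assumptions.

```python
# Illustrative sketch of the three-class decision (original, xerographic, ink jet print)
# using a variance-normalized distance to each class cluster.
import numpy as np

def classify(x, class_stats):
    """x: metric vector of the test object; class_stats: {name: (means, stds)} per class."""
    def distance(means, stds):
        return float(np.sum(((np.asarray(x) - means) / stds) ** 2))
    best = min(class_stats, key=lambda name: distance(*class_stats[name]))
    return "copy" if best in ("xerographic", "ink jet print") else best   # report copies as "copy"
```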
More complicated classification techniques exist, but this method has proven useful for the check authentication classification problem. The calculation is not a real distance, but approximates the relative likelihood of being in a class, if the underlying probability distribution of each cluster has a single peak and is symmetric.
Since the distance to each class must be computed to classify a test object, the classifier must know the mean and standard deviation of each. The classifier has been loaded with these values and validated through a process of training and testing. In training, a portion of the available data is used to determine the values loaded into the classifier. Then the rest of the data is used to test the classifier's ability to distinguish an original from a counterfeit.
kNN Classification
The k Nearest Neighbor (kNN) classifier is a non-parametric classifier that does not rely on any assumption about the distribution of the underlying data. The kNN classifier relies on instance-based learning. For a new data sample to be classified, the k nearest samples to this sample from the training set are identified. The nearest samples are identified using a distance measure, such as Euclidean distance or weighted Euclidean distance. The new sample is then assigned to the class that has the most instances in the k nearest samples. An appropriate value for k can be chosen by a method of cross-validation (running the classifier on random subsets of the training set). In our experiments, we have used k=5 with a Euclidean distance measure on a normalized set of metrics (metrics normalized to obtain similar dynamic range for each metric).
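A short sketch of this procedure, with k=5 and Euclidean distance on metrics already normalized to a similar dynamic range, is shown below; the data layout is an assumption.

```python
# Illustrative sketch of kNN classification of a new metric vector against the training set.
import numpy as np

def knn_classify(sample, train_X, train_y, k=5):
    """train_X: (n, d) array of normalized metric vectors; train_y: array of class labels."""
    dists = np.linalg.norm(train_X - np.asarray(sample), axis=1)   # Euclidean distances
    nearest = np.asarray(train_y)[np.argsort(dists)[:k]]           # labels of the k nearest samples
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                               # majority vote among neighbors
```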
The advantage of a kNN classifier is its non-linear, non-parametric nature and tolerance to arbitrary data distributions. A kNN classifier can perform well in the presence of multi-modal distributions. Since it uses local information (nearest samples), it can result in an adaptive classifier. In addition, implementation is simple.
The disadvantage of kNN is that it is suboptimal to a Bayes classifier if the data distributions are well defined and Gaussian. It can be susceptible to a large amount of irrelevant and noisy features and therefore benefits from a feature selection process. Although its implementation is simple, it can be computationally expensive if the number of features and the size of the training set are very large. However, it lends itself to parallel implementations.
Feature Search Algorithm
By combining features, classifiers can provide performance superior to that of classifiers using a single feature. This is especially true if the features are statistically independent, or nearly so. Ideally, the more features that are included in a classifier, the better will be the performance. Practically, this is not true, due to a phenomenon known as over-training. Since classifiers are trained on a finite set of data and used on a different (and usually much larger) set of test data, mismatches in the statistical distributions of features between these two sets can result in poor classifier performance. The likelihood of mismatch in these distributions is higher for small training sets than it is for large training sets, and classifiers using many features are more sensitive to this mismatch than are classifiers with few features. The end result is that larger training sets are required for reliable design of classifiers with many features. When a small training set is used to train a classifier with many features, the result may be over-training; the classifier performs very well for statistics modeled by the training data, but the statistics of the training set do not match those of the test set.
To find the best performing set of features for a classifier, given a training and test set, requires exhaustively searching among each possible combination of features. In many cases, there are many features to choose from, so this exhaustive search task becomes computationally complex. To reduce complexity, we have used a sub-optimal hierarchical search to lower the search complexity. The features have been grouped into three groups. Group 1 includes those closely related watermark metrics, group 2 includes those features indicative of frequency content of watermarked areas, and group 3 includes those features that are derived from other (non-watermarked) security features. The sub-optimal search begins by finding the best performing classifier using only features from group 1, and the best performing classifier using only features from group 2. The second step of the search algorithm finds the best classifier from among possible combinations. This step requires a search of a subset of the possible classifier designs. The goal in construction of this two-step algorithm was to use the first step to optimize among sets of the most closely related features to minimize interaction effects at the second step. In each search process, if there were multiple classifiers with equal performance, the classifier with the smallest number of features was chosen, to minimize over-training effects. The searches minimized a weighted sum of the probability of missed detection and the probability of false alarm. This weighting gave twice the weight to a false alarm as to a missed detection.
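The two-step search can be pictured with the sketch below; the evaluate() callback (train the classifier on a candidate feature subset and return its missed-detection and false-alarm probabilities on the test set) is assumed to be supplied by the surrounding system, and the grouping shown is the one described above.

```python
# Illustrative sketch of the sub-optimal, two-step feature search. The cost weights a
# false alarm twice as heavily as a missed detection; on ties, the smaller subset wins.
from itertools import combinations

def weighted_cost(p_miss, p_false_alarm):
    return p_miss + 2.0 * p_false_alarm

def best_subset(features, evaluate):
    """Exhaustive search over non-empty subsets of one feature group."""
    best, best_cost = None, float("inf")
    for r in range(1, len(features) + 1):                 # smaller subsets are visited first
        for subset in combinations(features, r):
            cost = weighted_cost(*evaluate(subset))
            if cost < best_cost:                          # strict '<' keeps the smaller subset on ties
                best, best_cost = subset, cost
    return best

def two_step_search(group1, group2, group3, evaluate):
    g1 = best_subset(group1, evaluate)                    # step 1: best watermark-metric subset
    g2 = best_subset(group2, evaluate)                    # step 1: best frequency-content subset
    candidates = list(g1) + list(g2) + list(group3)       # step 2: combine winners with group 3
    return best_subset(candidates, evaluate)
```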
Dual Contrast Watermarks
An objective of dual contrast digital watermarks is to identify counterfeits in which the contrast of the watermark has been modified with respect to the originals. The technique includes using two separate digital watermarks each occupying a specific contrast range. For example, one watermark could have a low contrast appearance, with the other watermark having a high contrast appearance.
As discussed below, the relative strengths of the two watermarks provide the distinction between originals and counterfeits. This makes the task of counterfeiting difficult as the counterfeiter must either reproduce the contrast of both watermarks accurately or modify them in such a way that their relative strengths remain unchanged. Accurate reproduction of contrast of both watermarks is difficult with casual counterfeiting tools (desktop scanners and printers, color copiers, black and white copiers) as they generally tend to enhance one of the watermarks increasing its contrast. Often this results in the other watermark being suppressed. Maintaining the relative strengths of the watermarks is difficult without access to the detector.
The watermarks are designed in such a manner that their interference with each other is reduced. To ensure reduced interference, aspects such as spatial frequency, structures and colors (e.g., as explained above) are exploited in addition to contrast.
Contrast
The two watermarks are intended to have a large separation in their contrast ranges. One watermark occupies the low contrast range and the other occupies the high contrast range. One possible choice for the low contrast range would be the lowest contrast range (given the watermark strength) that can be successfully printed by the security printer and detected by the detection scanners. Similarly, the high contrast range can be determined for a choice of printers and detection scanners.
Contrast enhancement during counterfeiting could result in one of the following cases:
1) The contrast is stretched such that the contrast of the high contrast watermark is increased and that of the low contrast watermark is decreased.
2) The contrast of the high contrast mark is enhanced, while the low contrast mark is unchanged.
3) The contrast of the low contrast mark is enhanced, effectively bringing its contrast range closer to that of the high contrast mark.
In each case above, one of the two watermarks is enhanced—at the detector this usually implies higher detected watermark strength. The above assumes that the grayscale modification curve (or gamma correction) is either not linear or does not preserve dynamic range.
Spatial Frequency
When there are multiple watermarks in an image or document they interfere with each other reducing the detectability of each watermark. For example if the two watermarks had the same synchronization signal comprising an array of peaks in a frequency domain, but different payloads, the synchronization signal strength would be reinforced whereas the two payloads would interfere with each other and be affected by synchronization signal noise. If each watermark had its own synchronization signal and payload then each would interfere with the other.
To reduce cross-watermark interference it is desirable to have the watermarks separated in spatial frequency. One way to achieve frequency separation is to embed the two watermarks at different resolutions. This method allows both watermarks to be of the same type without causing severe interference.
For copy detection testing, the low contrast watermark was embedded at 50 wpi whereas the high contrast watermark was embedded at 100 wpi. Other resolutions may be more appropriate depending upon the image resolution and the resolution of the detection scanners. An additional advantage of embedding at different resolutions is that the peaks in the synchronization signal of the combined watermarks cover a larger frequency range. This can lead to more effective discrimination metrics between originals and counterfeits.
Structures
Design structures provide a means to achieve separation between the watermark features as well as accentuate the differences in contrast ranges. Structures can be favorably exploited to yield increased discrimination between originals and counterfeits.
For our copy detection testing, the low contrast watermark is contained in a soft-sparse structure that appears like a background tint pattern. The high contrast watermark is contained in a LCM (line continuity modulation) structure giving it an appearance of distinct line patterns. An example is illustrated in
Colors/Inks
Color can be used as an important attribute to further improve the effectiveness of dual contrast watermarks. For example, inks having out-of-gamut colors could be used for the design structures. Reproducing such colors with CMYK equivalents would affect the contrast of the watermarks thus improving discrimination. More information is provided in Application 60/466,926, which is incorporated above.
A major advantage of the dual contrast technique is that both watermarks could be printed with the same ink. This avoids mis-registration issues. In this case, simply using an out-of-gamut ink would suffice.
In printing environments having high registration accuracy, the dual contrast technique could also be used with multiple inks to help render the counterfeiting process more difficult. In this case, each of the two watermarks would be printed with different inks. The inks could be selected such that their CMYK equivalents have a different contrast range relationship than the original inks.
Metrics from Dual Contrast Watermarks
The detector is run twice on the image—once with the WPI set to 100 to detect the high contrast, higher resolution watermark and then with the WPI set to 50 to detect the low contrast, lower resolution watermark. Each run produces a set of metrics from the core for the targeted watermark. Note that since the core is capable of detecting either (or both) of the watermarks during each run, the system can be designed to only read the targeted watermark. This could be achieved simply by observing the scale parameter of the detected synchronization signal(s) and restricting each run to use the synchronization signal that has scale 1.
The output from the detector core is a set of 2N metrics, N from each of the two runs to extract the two watermarks. These 2N metrics could directly be used as input features in a classifier configuration. In this case the classifier would learn the relationships between the same metric obtained through the two runs and also the relationship across metrics. Another approach is to obtain a new set of relative metrics. Our knowledge of these metrics can be used to devise relative metrics that highlight the differences between originals and counterfeits. This can simplify the design and task of the classifier.
Experimental Setup
The design for the dual contrast watermark consisted of one LCM watermark (high contrast) at 100 wpi and one soft-sparse watermark (low-contrast) at 50 wpi. The design was printed using Pantone 1595, which is an out-of-gamut orange.
Some example metrics are calculated as follows:
Correlation Strength Normalized Differential (CSND) = abs(CS_H − CS_L)/CS_H * 10
This metric is based on the number of peaks detected in the synchronization signal relative to the total number of peaks in the synchronization signal.
Weighted Correlation Normalized Differential (WCND) = abs(WC_H − WC_L)/WC_H * 5
This metric is similar to CS, except that it is weighted by frequency and focuses on peaks of the synchronization signal in the mid-frequency range.
Detection Value Ratio (DVR) = DV_H/DV_L
This metric is based on the ratio of the highest correlation value for the synchronization signal to the next highest candidate.
PRAM Ratio (PRAMR) = PRAM_H/PRAM_L
This PRAM metric compares the raw watermark message payload before error correction with the actual watermark message.
Signature Level SNR Differential (SNRD) = abs(SNR_H − SNR_L)
This is similar to the PRAM metric in that it is based on watermark message values, but the measure is converted to a Signal to Noise metric using the standard deviation of the watermark message signal.
In the above, abs(.) denotes the absolute value operation and the subscripts H and L stand for the high contrast and low contrast watermark, respectively.
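Expressed as a small sketch, with the *_h and *_l inputs taken as the per-run metric values reported by the detector core:

```python
# Illustrative sketch: compute the relative dual-contrast metrics from the two detector
# runs; *_h values come from the high contrast run, *_l values from the low contrast run.
def dual_contrast_metrics(cs_h, cs_l, wc_h, wc_l, dv_h, dv_l, pram_h, pram_l, snr_h, snr_l):
    return {
        "CSND":  abs(cs_h - cs_l) / cs_h * 10,   # Correlation Strength Normalized Differential
        "WCND":  abs(wc_h - wc_l) / wc_h * 5,    # Weighted Correlation Normalized Differential
        "DVR":   dv_h / dv_l,                    # Detection Value Ratio
        "PRAMR": pram_h / pram_l,                # PRAM Ratio
        "SNRD":  abs(snr_h - snr_l),             # Signature Level SNR Differential
    }
```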
Introduction
With the adoption of the Check Truncation Act, the banking industry will be allowed to destroy physical checks at the time of processing and transmit digital images of the checks in their place. The images of the checks will be used for all subsequent processing, except when a physical form of the check is needed by a financial institution or customer that does not wish to or cannot use an electronic version.
The physical form of the check when created from the captured image is referred to as an Image Replacement Document (IRD). IRDs are afforded the same legal protection and status as the original check; as such, an IRD is subject to the same forms of fraud as the original check and is likely to become a vehicle for new forms of fraud that attack the IRD management/generation system itself.
Workflow
Applications of Digital Watermarking to the IRD
Digital watermarking can be used to thwart three main categories of attack to the IRD:
1. Re-Origination
While less likely than re-origination of an original check, this is still a possibility and is made easier given that IRDs are produced on generic paper with standard office printing hardware.
2. Copying
This form of attack is likely for the IRD. Given how it is produced, the counterfeiter has to make no special effort to reproduce security features (produced with office equipment on generic paper).
3. Data Alteration
As with the original check, the data on the IRD could be altered. Techniques are currently proposed to protect against this attack that use a visible bar code or data carrying seal structure. These techniques are localized to the bar code or seal structure and are not covert.
Watermarks can play a role against the three major forms of attack expected against the IRD, but they can also play a role in the system itself. Each time a check, Original IRD, or Subsequent IRD is imaged, it is stored in some form in an Image Database. These databases are enormous and will be distributed throughout the financial transaction system (constituent banks and even customers that image at POS).
The volume and heterogeneous nature of the system make it difficult to reliably attach any meta-data to the images. The integrity of the images and related meta-data also needs to be assured.
These challenges/needs are similar to those of professional stock houses that collect and distribute imagery over the Internet. As such all the applications of digital watermarking to digital rights management apply to the IRD workflow as well. For example, U.S. Pat. No. 6,122,403 describes methods for persistently linking images to metadata about the owner of the image as well as other rights. This metadata can include information to authenticate the IRD, such as a secure hash, etc., or a set of rules that govern how the IRD is to be used, processed or transferred.
Finally, the ability to generate Subsequent IRDs based on images generated from prior IRDs places a new requirement on potential watermarking solutions: the ability to survive the reproduction process associated with Subsequent IRD generation and potentially to distinguish between the IRD reproduction processes used by financial institutions and those used by counterfeiters. Robust watermarks designed for survival through these reproduction processes can be used to persistently link to metadata that tracks the processing history of the IRD, including who/what has processed the IRD, at what time and place, and the type of processing that has occurred. In addition, copy detection features, including digital watermarks as described above, may be used to detect different types of reproduction operations.
Digital Anti-Counterfeiting On-Board Mediator (DACOM) Architecture
In this section, we describe how digital watermarks can be used as an on-board mediator for use in authenticating printed documents. Our examples particularly illustrate the DACOM application of digital watermarks in checks, but this application also applies to other secure documents, including value documents like bank notes, securities, stamps, credit and debit cards, etc. and identification documents, like travel documents, driver's licenses, membership cards, corporate badges, etc.
We start with some definitions.
Effective security architectures traditionally rely on multiple levels of security using redundant features at each layer of the system. A system to thwart check fraud will require the same tiered approach, with the added complexity that checks flow through a heterogeneous network of devices and usage scenarios, all of which play a role in eliminating fraud.
For each usage scenario and type of attack, there can be an individual verification process. For example, the tools to perform verification of re-origination at POS (using POS imagers) will be different from, although perhaps related to, those used in the high speed sorting environment, the differences being a function of imager availability, resolution, color depth, required decision time and acceptable levels of bad decisions. Thus, with three basic forms of attack and four basic usage scenarios, each having a unique verification process, there are 12 scenarios in which a different security feature, or variation thereof, may be employed.
This is all to say that a truly robust system of check verification would be able to look for many features rather than a single feature, some of which may or may not be digital watermarks. For example, a digital scan checking for re-origination might look at characteristics of a known good check (background color, logo placement, etc) whereas the scan for copying might search for a change in print metrics.
Digital Anti-Counterfeiting On-board Mediator (DACOM)
Given a document with multiple digital verification features, there are several approaches to tracking and verifying the features from one individual document to the next:
The specific case for use of digital watermarking as the mediator is as follows:
The DACOM architecture may include four signal levels:
The DACOM protocol preferably includes an extensible protocol that, as a starting point, may contain one or more of the following fields.
DACOM Detection Architecture
The detection architecture is made up of three basic components, the DACOM Watermark Detector, State Machine and one or more Detection Feature Modules. Each component may be from different suppliers and tuned for specific detection environments (high-speed, POS, etc.).
DACOM Integration Into Imaging Workflows
The architecture shown in
A DACOM Detection system can be implemented as either a monolithic or distributed system within or spread throughout various imaging workflows.
Moreover, the use of the system is not limited to checks, but instead, can be used in imaging systems used to process, and more specifically, to authenticate documents. Another example is an identification document, which typically has layers of security features, such as digital watermarks, bar codes, magnetic stripes, holograms, smart cards, RF ID, etc. The DACOM watermark enables the reader system to coordinate authentication of all of these features, as well as trigger other actions, such as database retrievals, database authentication operations, and biometric verification among the following sources of biometric information: biometric information derived from the document (e.g., a facial image on the card, or biometric information in a machine readable code on the card), biometric information derived in real time from the bearer of the document, and biometric information extracted from a biometric database.
Integration with MICR Reader
Digital watermark reading may be integrated within existing reader devices. For checks and financial documents, one such device is a MICR imager; another is a magnetic stripe reader. For point of sale applications, a MICR imager, such as a MICRimager from Magtek, may be integrated with an image capture device (e.g., a CCD sensor array) for capturing an image from which a digital watermark is extracted. An example of such an imager is a fixed focal plane, skew-corrected image sensor.
For additional functionality, the reader can also be equipped with a magnetic stripe reader adapted to extract a security feature called a Magneprint. This Magneprint feature is a unique “fingerprint” of a particular magnetic stripe. In one implementation, it carries information that is used to query a database that associates fingerprint information extracted from the magnetic stripe with the card. This association between the card and the stripe fingerprint can be registered at the time of card issuance, in the case of card-related documents.
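Purely as an illustrative assumption (the actual Magneprint service and its comparison algorithm are proprietary), the registration-and-lookup flow might look like the following outline.

```python
# Hypothetical sketch of Magneprint registration at issuance and lookup at
# verification time. similarity() stands in for the proprietary stripe-
# fingerprint matcher, which returns a score rather than an exact match.
from typing import Dict

registry: Dict[str, bytes] = {}   # card identifier -> registered stripe fingerprint


def similarity(a: bytes, b: bytes) -> float:
    """Placeholder for the Magneprint comparison score in [0, 1]."""
    raise NotImplementedError


def register_at_issuance(card_id: str, fingerprint: bytes) -> None:
    """Store the stripe fingerprint captured when the card is issued."""
    registry[card_id] = fingerprint


def verify_stripe(card_id: str, fingerprint: bytes, threshold: float = 0.9) -> bool:
    """Query the registry and require the fresh read to match the stored print."""
    stored = registry.get(card_id)
    return stored is not None and similarity(stored, fingerprint) >= threshold
```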
The combined DWM, MICR, and Magneprint reader performs either on-line or off-line authentication and verification of the document. This reading and verification can be performed in a point of sale terminal. Off-line verification (verification without reference to an external database) is performed by cross-checking information among the digital watermark, Magneprint, and/or MICR through shared message information or functionally related message information (e.g., one is a hash of the other, one is a checksum for the other, etc.). If the predetermined relationship or interdependency between the watermark, MICR, and/or Magneprint information is not maintained, the reader deems the document invalid. On-line verification may be performed using the MICR, Magneprint, and/or watermark to index database entries for additional information for comparison.
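A minimal off-line cross-check sketch follows, assuming (purely for illustration) that the watermark payload carries a truncated hash of the MICR line and a CRC-32 of the Magneprint fingerprint; the specification does not fix a particular field layout or hash function.

```python
# A minimal sketch, assuming the watermark payload carries a truncated SHA-256
# hash of the MICR line and a CRC-32 of the Magneprint fingerprint. The field
# layout and functions are illustrative assumptions, not a defined format.
import hashlib
import zlib


def micr_hash(micr_line: str, n_bytes: int = 4) -> bytes:
    """Truncated hash of the MICR characters, as might be carried in the watermark."""
    return hashlib.sha256(micr_line.encode("ascii")).digest()[:n_bytes]


def magneprint_checksum(fingerprint: bytes) -> int:
    """CRC-32 of the magnetic-stripe fingerprint data."""
    return zlib.crc32(fingerprint) & 0xFFFFFFFF


def offline_verify(wm_micr_hash: bytes, wm_mag_crc: int,
                   micr_line: str, fingerprint: bytes) -> bool:
    """Document is deemed valid only if both interdependencies hold."""
    return (wm_micr_hash == micr_hash(micr_line) and
            wm_mag_crc == magneprint_checksum(fingerprint))
```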
The MICR/magnetic stripe/digital watermark imager may be integrated with other reader devices, such as a bar code reader, smart card reader, or laser reader (e.g., for hologram- or kinegram-embedded information). Imaging can be performed by moving the card across an aligned and illuminated linear scan element, or by means of a focused imaging array of sufficient resolution.
Digital Watermarks for Managing Quality of Imaging Systems
In Ser. No. 09/951,143, which is hereby incorporated by reference, we described the use of digital watermarks to measure the quality of a particular video or document channel through the use of digital watermark metrics. This method may be applied specifically in the context of imaging systems used for documents like checks. With the advent of the check truncation act, the emphasis on quality and uniformity of image capture of check documents is going to increase. Digital watermark metrics, and specifically some of the watermark metrics described in this document, may be used to monitor the quality of an imaging system and to ensure that scanners that do not fall within a predetermined operating range are detected and calibrated properly.
Previously, calibration has been performed by passing test images through an imaging system. With digital watermarks associated with quality metrics, however, there is no longer a need to rely only on test patterns. In the case of checks, arbitrary check designs can be created, each carrying standard embedded digital watermarks used for quality measurement. Within the system, the watermark detector measures the quality of check images based on metrics extracted from the digital watermarks embedded in those images.
There are a variety of ways to measure image quality. One way is to design the watermark signal so that it is tuned to measure the modulation transfer function of the scanner. As a specific example, the watermark can be used to detect when the frequency response of the scanner has moved outside predetermined ranges. In particular, the frequency characteristics of the watermark, such as the peaks in the synchronization signal or other frequency domain attributes, can be measured to detect a change in peak sharpness, missing peaks, or missing watermark signal elements. Similarly, watermark message payloads can be redundantly embedded at different frequencies, and the amount of payload recovery at each frequency can be used as a metric of the scanner's frequency response and to detect when it is out of range.
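As a sketch of such a measurement, assume the expected synchronization-peak locations in the FFT magnitude plane are known; the peak coordinates, neighborhood radius, and threshold below are illustrative placeholders rather than values from this disclosure.

```python
# A minimal sketch, assuming the expected synchronization-peak locations in the
# shifted FFT magnitude plane are known and lie away from the image border.
# The neighborhood radius and min_ratio threshold are illustrative values.
import numpy as np


def peak_sharpness(image, peaks, radius=3):
    """Return the ratio of each expected peak to the mean of its neighborhood."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    scores = []
    for (r, c) in peaks:
        window = mag[r - radius:r + radius + 1, c - radius:c + radius + 1]
        local_mean = (window.sum() - mag[r, c]) / (window.size - 1)
        scores.append(mag[r, c] / (local_mean + 1e-9))
    return scores


def scanner_in_range(image, peaks, min_ratio=4.0):
    """Flag the scanner for recalibration if any sync peak is weak or missing."""
    return all(s >= min_ratio for s in peak_sharpness(image, peaks))
```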
Another approach is to use the digital watermark to measure tonal response. Scanners tend to have a non-linear tonal response due to gamma correction and the like. The tonal range can be divided into predetermined sub-ranges, e.g., 0-127 and 128-255. The digital watermark signal can then be quantized, or features of the host document image can be quantized, so that the signal is concentrated at selected tonal levels within these ranges, such as level 64 in the first range and level 192 in the second. The distortion of the watermark at these tonal levels indicates how the scanner is distorting the dynamic range in the corresponding tonal regions. In one particular implementation, the digital watermark, or the watermarked image, is created to have distinct histogram peaks at various tonal levels. Distortion of these peaks beyond certain thresholds indicates that the scanner is out of range and needs to be re-calibrated.
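Continuing the example levels above (64 and 192), a histogram-peak check might be sketched as follows; the tolerance value is an assumption for illustration, not a figure taken from this disclosure.

```python
# A minimal sketch, assuming the watermarked design concentrates signal so the
# scanned image shows histogram peaks near levels 64 and 192; the tolerance is
# an illustrative value.
import numpy as np


def histogram_peak(image, lo, hi):
    """Gray level with the highest count inside the tonal range [lo, hi]."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    return lo + int(np.argmax(hist[lo:hi + 1]))


def tonal_response_ok(scanned,
                      expected_peaks=((0, 127, 64), (128, 255, 192)),
                      tolerance=10):
    """Each designed histogram peak must stay within `tolerance` levels of its target."""
    return all(abs(histogram_peak(scanned, lo, hi) - target) <= tolerance
               for lo, hi, target in expected_peaks)
```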
The digital watermark provides an advantage in that it carries the signal from which this form of scanner quality metric is measured, and it indicates when scanners need to be re-calibrated or updated.
Having described and illustrated the principles of the technology with reference to specific implementations, it will be recognized that the technology can be implemented in many other, different, forms. To provide a comprehensive disclosure without unduly lengthening the specification, applicants incorporate by reference the patents and patent applications referenced above.
The methods, processes, and systems described above may be implemented in hardware, software or a combination of hardware and software. For example, the auxiliary data encoding processes may be implemented in a programmable computer or a special purpose digital circuit. Similarly, auxiliary data decoding may be implemented in software, firmware, hardware, or combinations of software, firmware and hardware. The methods and processes described above may be implemented in programs executed from a system's memory (a computer readable medium, such as an electronic, optical or magnetic storage device).
The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this and the incorporated-by-reference patents/applications are also contemplated.
This patent application claims priority to U.S. Provisional Application Nos. 60/430,014, filed Nov. 28, 2002, 60/440,593, filed Jan. 15, 2003, 60/466,926, filed Apr. 30, 2003, and 60/475,389, filed Jun. 2, 2003, which are hereby incorporated by reference. This patent application is also a continuation in part of Ser. No. 10/165,751, filed Jun. 6, 2002, which is a continuation of Ser. No. 09/074,034, filed May 6, 1998 (now U.S. Pat. No. 6,449,377). This patent application is also a continuation in part of Ser. No. 10/012,703, filed Dec. 7, 2001, which is a continuation of Ser. No. 09/433,104, filed Nov. 3, 1999 (now U.S. Pat. No. 6,636,615), which is a continuation in part of Ser. No. 09/234,780, filed Jan. 20, 1999 (now abandoned), which claims priority to 60/071,983, filed Jan. 20, 1998. This patent application is also a continuation in part of Ser. No. 09/898,901, filed Jul. 2, 2001. This application is related to U.S. Pat. Nos. 6,332,031 and 6,449,377, U.S. application Ser. No. 09/938,870, filed Aug. 23, 2001, Ser. No. 09/731,456, filed Dec. 6, 2000, Ser. No. 10/052,895, filed Jan. 17, 2002, Ser. No. 09/840,016, filed Apr. 20, 2001, and International Application PCT/US02/20832, filed Jul. 1, 2002. The above patents and applications are hereby incorporated by reference.
U.S. Patent Documents
Number | Name | Date | Kind |
---|---|---|---|
2748190 | Yule | May 1956 | A |
4884828 | Burnham et al. | Dec 1989 | A |
5074596 | Castagnoli | Dec 1991 | A |
5291243 | Heckman et al. | Mar 1994 | A |
5315098 | Tow | May 1994 | A |
5337361 | Wang et al. | Aug 1994 | A |
5396559 | McGrew | Mar 1995 | A |
5481378 | Sugano et al. | Jan 1996 | A |
5488664 | Shamir | Jan 1996 | A |
5493677 | Balogh et al. | Feb 1996 | A |
5530759 | Braudaway et al. | Jun 1996 | A |
5617119 | Briggs et al. | Apr 1997 | A |
5664018 | Leighton | Sep 1997 | A |
5687297 | Coonan et al. | Nov 1997 | A |
5734752 | Knox | Mar 1998 | A |
5745604 | Rhoads | Apr 1998 | A |
5748763 | Rhoads | May 1998 | A |
5768426 | Rhoads | Jun 1998 | A |
5781653 | Okubo | Jul 1998 | A |
5790703 | Wang | Aug 1998 | A |
5824447 | Tavernier et al. | Oct 1998 | A |
5850481 | Rhoads | Dec 1998 | A |
5949055 | Fleet et al. | Sep 1999 | A |
5995638 | Amidror et al. | Nov 1999 | A |
6014453 | Sonoda et al. | Jan 2000 | A |
6026193 | Rhoads | Feb 2000 | A |
6064764 | Bhaskaran et al. | May 2000 | A |
6091844 | Fujii et al. | Jul 2000 | A |
6122392 | Rhoads | Sep 2000 | A |
6198454 | Sharp et al. | Mar 2001 | B1 |
6198545 | Ostromoukhov et al. | Mar 2001 | B1 |
6233684 | Stefik et al. | May 2001 | B1 |
6266430 | Rhoads | Jul 2001 | B1 |
6275599 | Adler et al. | Aug 2001 | B1 |
6285775 | Wu et al. | Sep 2001 | B1 |
6285776 | Rhoads | Sep 2001 | B1 |
6289108 | Rhoads | Sep 2001 | B1 |
6330335 | Rhoads | Dec 2001 | B1 |
6332031 | Rhoads et al. | Dec 2001 | B1 |
6343138 | Rhoads | Jan 2002 | B1 |
6345104 | Rhoads | Feb 2002 | B1 |
6353672 | Rhoads | Mar 2002 | B1 |
6363159 | Rhoads | Mar 2002 | B1 |
6389151 | Carr et al. | May 2002 | B1 |
6400827 | Rhoads | Jun 2002 | B1 |
6404898 | Rhoads | Jun 2002 | B1 |
6427020 | Rhoads | Jul 2002 | B1 |
6430302 | Rhoads | Aug 2002 | B2 |
6434322 | Kimura et al. | Aug 2002 | B1 |
6449377 | Rhoads | Sep 2002 | B1 |
6449379 | Rhoads | Sep 2002 | B1 |
6496591 | Rhoads | Dec 2002 | B1 |
6519352 | Rhoads | Feb 2003 | B2 |
6522771 | Rhoads | Feb 2003 | B2 |
6535618 | Rhoads | Mar 2003 | B1 |
6539095 | Rhoads | Mar 2003 | B1 |
6542618 | Rhoads | Apr 2003 | B1 |
6542620 | Rhoads | Apr 2003 | B1 |
6560349 | Rhoads | May 2003 | B1 |
6567534 | Rhoads | May 2003 | B1 |
6567535 | Rhoads | May 2003 | B2 |
6567780 | Rhoads | May 2003 | B2 |
6574350 | Rhoads et al. | Jun 2003 | B1 |
6580819 | Rhoads | Jun 2003 | B1 |
6587821 | Rhoads | Jul 2003 | B1 |
6590997 | Rhoads | Jul 2003 | B2 |
6591009 | Usami et al. | Jul 2003 | B1 |
6611599 | Natarajan | Aug 2003 | B2 |
6636615 | Rhoads et al. | Oct 2003 | B1 |
6647129 | Rhoads | Nov 2003 | B2 |
6654480 | Rhoads | Nov 2003 | B2 |
6654887 | Rhoads | Nov 2003 | B2 |
6675146 | Rhoads | Jan 2004 | B2 |
6690811 | Au et al. | Feb 2004 | B2 |
6721440 | Reed et al. | Apr 2004 | B2 |
6724912 | Carr et al. | Apr 2004 | B1 |
6738495 | Rhoads et al. | May 2004 | B2 |
6744906 | Rhoads et al. | Jun 2004 | B2 |
6744907 | Rhoads | Jun 2004 | B2 |
6750985 | Rhoads | Jun 2004 | B2 |
6754377 | Rhoads | Jun 2004 | B2 |
6757406 | Rhoads | Jun 2004 | B2 |
6768808 | Rhoads | Jul 2004 | B2 |
6771796 | Rhoads | Aug 2004 | B2 |
6778682 | Rhoads | Aug 2004 | B2 |
6782116 | Zhao et al. | Aug 2004 | B1 |
6785815 | Serret-Avila et al. | Aug 2004 | B1 |
6804379 | Rhoads | Oct 2004 | B2 |
6834344 | Aggarwal et al. | Dec 2004 | B1 |
6882738 | Davis et al. | Apr 2005 | B2 |
6940995 | Choi et al. | Sep 2005 | B2 |
6944298 | Rhoads | Sep 2005 | B1 |
6959100 | Rhoads | Oct 2005 | B2 |
6959386 | Rhoads | Oct 2005 | B2 |
6970259 | Lunt et al. | Nov 2005 | B1 |
6970573 | Carr et al. | Nov 2005 | B2 |
6978036 | Alattar et al. | Dec 2005 | B2 |
6983051 | Rhoads | Jan 2006 | B1 |
6987862 | Rhoads | Jan 2006 | B2 |
6993152 | Patterson et al. | Jan 2006 | B2 |
7003132 | Rhoads | Feb 2006 | B2 |
7017045 | Krishnamachari | Mar 2006 | B1 |
7020285 | Kirovski | Mar 2006 | B1 |
7027189 | Umeda | Apr 2006 | B2 |
7046808 | Metois et al. | May 2006 | B1 |
7054461 | Zeller et al. | May 2006 | B2 |
7054462 | Rhoads et al. | May 2006 | B2 |
7054463 | Rhoads et al. | May 2006 | B2 |
7076084 | Davis et al. | Jul 2006 | B2 |
7113615 | Rhoads et al. | Sep 2006 | B2 |
7130087 | Rhoads | Oct 2006 | B2 |
7171020 | Rhoads et al. | Jan 2007 | B2 |
7181022 | Rhoads | Feb 2007 | B2 |
7184570 | Rhoads | Feb 2007 | B2 |
7239734 | Alattar et al. | Jul 2007 | B2 |
7242790 | Rhoads | Jul 2007 | B2 |
7263203 | Rhoads et al. | Aug 2007 | B2 |
7266217 | Rhoads et al. | Sep 2007 | B2 |
7269275 | Carr et al. | Sep 2007 | B2 |
7286684 | Rhoads et al. | Oct 2007 | B2 |
7286685 | Brunk et al. | Oct 2007 | B2 |
7305117 | Davis et al. | Dec 2007 | B2 |
7313253 | Davis et al. | Dec 2007 | B2 |
7321667 | Stach | Jan 2008 | B2 |
7340076 | Stach et al. | Mar 2008 | B2 |
7359528 | Rhoads | Apr 2008 | B2 |
7372976 | Rhoads et al. | May 2008 | B2 |
7400743 | Rhoads et al. | Jul 2008 | B2 |
7415129 | Rhoads | Aug 2008 | B2 |
7418111 | Rhoads | Aug 2008 | B2 |
7424132 | Rhoads | Sep 2008 | B2 |
7446891 | Haas et al. | Nov 2008 | B2 |
7499564 | Rhoads | Mar 2009 | B2 |
7532741 | Stach | May 2009 | B2 |
7536555 | Rhoads | May 2009 | B2 |
7537170 | Reed et al. | May 2009 | B2 |
7539325 | Rhoads et al. | May 2009 | B2 |
7548643 | Davis et al. | Jun 2009 | B2 |
7555139 | Rhoads et al. | Jun 2009 | B2 |
7567686 | Rhoads | Jul 2009 | B2 |
7570784 | Alattar | Aug 2009 | B2 |
7602940 | Rhoads et al. | Oct 2009 | B2 |
7602977 | Rhoads et al. | Oct 2009 | B2 |
7606390 | Rhoads | Oct 2009 | B2 |
7639837 | Carr et al. | Dec 2009 | B2 |
7672477 | Rhoads | Mar 2010 | B2 |
7676059 | Rhoads | Mar 2010 | B2 |
7702511 | Rhoads | Apr 2010 | B2 |
7720249 | Rhoads | May 2010 | B2 |
7720255 | Rhoads | May 2010 | B2 |
7724919 | Rhoads | May 2010 | B2 |
7778437 | Rhoads et al. | Aug 2010 | B2 |
7796826 | Rhoads et al. | Sep 2010 | B2 |
7831062 | Stach | Nov 2010 | B2 |
20010020270 | Yeung et al. | Sep 2001 | A1 |
20010022848 | Rhoads | Sep 2001 | A1 |
20010030759 | Hayashi et al. | Oct 2001 | A1 |
20010047478 | Mase | Nov 2001 | A1 |
20020031240 | Levy et al. | Mar 2002 | A1 |
20020037093 | Murphy | Mar 2002 | A1 |
20020054355 | Brunk | May 2002 | A1 |
20020054692 | Suzuki et al. | May 2002 | A1 |
20020061121 | Rhoads et al. | May 2002 | A1 |
20020061122 | Fujihara et al. | May 2002 | A1 |
20020076082 | Arimura et al. | Jun 2002 | A1 |
20020080995 | Rhoads | Jun 2002 | A1 |
20020095577 | Nakamura et al. | Jul 2002 | A1 |
20020099943 | Rodriguez et al. | Jul 2002 | A1 |
20020106102 | Au et al. | Aug 2002 | A1 |
20020114456 | Sako | Aug 2002 | A1 |
20020126842 | Hollar | Sep 2002 | A1 |
20020136429 | Stach et al. | Sep 2002 | A1 |
20020144130 | Rosner et al. | Oct 2002 | A1 |
20020150246 | Ogino | Oct 2002 | A1 |
20020176114 | Zeller et al. | Nov 2002 | A1 |
20020178368 | Yin et al. | Nov 2002 | A1 |
20020181732 | Safavi-Naini et al. | Dec 2002 | A1 |
20030021440 | Rhoads | Jan 2003 | A1 |
20030033529 | Ratnakar et al. | Feb 2003 | A1 |
20030081779 | Ogino | May 2003 | A1 |
20030099374 | Choi et al. | May 2003 | A1 |
20030138128 | Rhoads | Jul 2003 | A1 |
20030156733 | Zeller et al. | Aug 2003 | A1 |
20040057581 | Rhoads | Mar 2004 | A1 |
20040075869 | Hilton et al. | Apr 2004 | A1 |
20040091131 | Honsinger et al. | May 2004 | A1 |
20040145661 | Murakami et al. | Jul 2004 | A1 |
20040181671 | Brundage et al. | Sep 2004 | A1 |
20050111027 | Cordery et al. | May 2005 | A1 |
20050114668 | Haas et al. | May 2005 | A1 |
20060028689 | Perry et al. | Feb 2006 | A1 |
20060062386 | Rhoads | Mar 2006 | A1 |
20060075240 | Kalker et al. | Apr 2006 | A1 |
20070016790 | Brundage et al. | Jan 2007 | A1 |
20070172097 | Rhoads et al. | Jul 2007 | A1 |
20070172098 | Rhoads | Jul 2007 | A1 |
20070180251 | Carr | Aug 2007 | A1 |
20070201835 | Rhoads | Aug 2007 | A1 |
20080131083 | Rhoads | Jun 2008 | A1 |
20080131084 | Rhoads | Jun 2008 | A1 |
20080149713 | Rhoads et al. | Jun 2008 | A1 |
20080253740 | Rhoads | Oct 2008 | A1 |
20080275906 | Rhoads | Nov 2008 | A1 |
20090252401 | Davis et al. | Oct 2009 | A1 |
20100008534 | Rhoads | Jan 2010 | A1 |
20100008536 | Rhoads | Jan 2010 | A1 |
20100008537 | Rhoads | Jan 2010 | A1 |
20100021004 | Rhoads | Jan 2010 | A1 |
20100027969 | Alattar | Feb 2010 | A1 |
20100040255 | Rhoads | Feb 2010 | A1 |
20100119108 | Rhoads | May 2010 | A1 |
20100131767 | Rhoads | May 2010 | A1 |
20100142752 | Rhoads et al. | Jun 2010 | A1 |
20100146285 | Rhoads et al. | Jun 2010 | A1 |
20100163629 | Rhoads et al. | Jul 2010 | A1 |
20100172538 | Rhoads | Jul 2010 | A1 |
Foreign Patent Documents
Number | Date | Country |
---|---|---|
0921675 | Jun 1999 | EP |
0961239 | Dec 1999 | EP |
1202250 | May 2002 | EP |
351 960 | Jun 1931 | GB |
WO 0225599 | Mar 2002 | WO |
WO 0250773 | Jun 2002 | WO |
WO 02056264 | Jul 2002 | WO |
WO 02059712 | Aug 2002 | WO |
WO 02084990 | Oct 2002 | WO |
WO 02089057 | Nov 2002 | WO |
Prior Publication Data
Number | Date | Country |
---|---|---|---|
20040263911 A1 | Dec 2004 | US |
Provisional Applications
Number | Date | Country |
---|---|---|---|
60430014 | Nov 2002 | US | |
60440593 | Jan 2003 | US | |
60466926 | Apr 2003 | US | |
60475389 | Jun 2003 | US | |
60071983 | Jan 1998 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09074034 | May 1998 | US |
Child | 10165751 | US | |
Parent | 10723181 | US | |
Child | 10165751 | US | |
Parent | 09433104 | Nov 1999 | US |
Child | 10012703 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10165751 | Jun 2002 | US |
Child | 10723181 | US | |
Parent | 10012703 | Dec 2001 | US |
Child | 10723181 | US | |
Parent | 09234780 | Jan 1999 | US |
Child | 09433104 | US | |
Parent | 10723181 | US | |
Child | 09433104 | US | |
Parent | 09898901 | Jul 2001 | US |
Child | 10723181 | US |