Database for detecting counterfeit items using digital fingerprint records

Information

  • Patent Grant
  • Patent Number
    11,423,641
  • Date Filed
    Monday, November 23, 2020
  • Date Issued
    Tuesday, August 23, 2022
Abstract
Improvements are disclosed for detecting counterfeit objects, based on comparison to digital fingerprints that describe features found in images of objects known to be counterfeit.
Description
COPYRIGHT NOTICE

© 2018 Alitheon, Inc. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. 37 CFR § 1.71(d).


TECHNICAL FIELD

This invention pertains to methods and apparatus to identify and/or authenticate physical items, including documents, and more specifically to using a secure database of digital fingerprint records to detect a counterfeit item.


BACKGROUND OF THE INVENTION

Counterfeiting of manufactured goods is a worldwide problem, with recent studies estimating that 8% of the world's total GDP is now generated by the manufacturing and sales of counterfeit products. Many classes of counterfeit goods create substantial risks to public health including counterfeit pharmaceutical drugs, auto parts, pesticides, and children's toys. In addition, counterfeit computer chips, aerospace parts, and identification documents present significant risks to national security.


Many different approaches have been tried to uniquely identify and authenticate objects, including serial numbers, bar codes, holographic labels, RFID tags, and hidden patterns using security inks or special fibers. All of these methods can be duplicated, and many add a substantial extra cost to the production of the goods being protected. In addition, physically marking certain objects such as artwork, gemstones, and collector-grade coins can damage or destroy the value of the object.


If identifying or certifying information is stored separately from the object in the form of a label, tag, or certificate the entire identification/certification process must typically be performed again if the object is lost and later recovered, or its chain of control is otherwise compromised. There is a need for solutions that can prove the provenance of an object once the chain of custody is disrupted by the removal of the object from safe custody and/or the loss of the associated identification or certification information.


Other known techniques call for comparing bitmaps of images of the objects themselves, or of selected regions of interest. Referring now to FIG. 8, an image of the original object is taken and stored for reference. The whole image is stored, although it may be compressed for efficiency. When a new object is encountered, an image is taken of the new object and compared directly to the original image using XOR or similar algorithms. If there are no (or only statistically insignificant) differences, the images are declared a match and the object is authenticated. Further, FFT or similar transforms may be used to generate a “digital signature” of the image that can be used for comparison. See FIG. 9. As in the previous case, however, the same method is used—the resultant bitmapped image is compared with another bitmapped image, and if the pixels match the object is authenticated. Such methods are disclosed in U.S. Pat. No. 7,680,306 to Boutant et al. Bitmapped techniques are inefficient because of issues such as file size, and they have serious limitations that make them effectively unusable in most real-world applications, owing to variable lighting, variable image orientation, and the need to authenticate worn, damaged, or otherwise altered objects.


SUMMARY OF THE INVENTION

The following is a summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not intended to identify key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later.


A physical object is scanned and a digital image of the object is created from the scan. A subset of the image known as an “authentication region” is selected. A set of features is extracted from the authentication region, which is sufficient to create a unique identifier or “digital fingerprint” for that object. The digital fingerprint may be registered in a database.


To select locations in an image from which to extract fingerprint features, a software process automatically selects a large number—typically hundreds or even thousands per square mm—of preferred areas of interest for purposes of digital fingerprinting. A location may be of interest because of a relatively high level of content. That “content” in a preferred embodiment may comprise a gradient or vector, including a change in value and a direction.


In a preferred embodiment, each such area of interest is identified as a circle, for example, by centroid location and radius. Within each circular area of interest, the software then extracts one or more fingerprint features that define the relevant shapes within the corresponding circular location of the image. Each fingerprint feature preferably is stored as a feature vector as illustrated below. A feature vector preferably is an array of integer or floating point values describing an individual shape.
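By way of illustration only, the following minimal sketch (in Python, with hypothetical names; the disclosure does not prescribe any particular data layout) represents one circular area of interest by its centroid location, radius, and a feature vector of floating point values, and collects such areas into a digital fingerprint record.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AreaOfInterest:
    """One circular area of interest within an authentication region."""
    cx: float          # centroid x coordinate, in image pixels
    cy: float          # centroid y coordinate, in image pixels
    radius: float      # radius of the circular area, in image pixels
    feature_vector: List[float] = field(default_factory=list)  # values describing the shape

@dataclass
class DigitalFingerprint:
    """A digital fingerprint: the set of feature vectors extracted from one object."""
    object_class: str                               # e.g. "US Passport"
    areas: List[AreaOfInterest] = field(default_factory=list)

# Example: one area of interest centered at (121.5, 48.0) with a 6-pixel radius.
fp = DigitalFingerprint(object_class="US Passport")
fp.areas.append(AreaOfInterest(cx=121.5, cy=48.0, radius=6.0,
                               feature_vector=[0.12, 0.87, 0.33, 0.05]))
```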


When an object is to be authenticated, a suitable system compares the digital fingerprint of the object to digital fingerprints previously stored in the database and, based on that comparison, determines whether the object has been registered before and is thus authentic. The digital fingerprint data specifies a set of features. Preferably, an “object feature template” may be created which has a list of specific features and attributes that are relevant for authenticating a particular class of objects. A template may identify locations of particular features. One of the key advantages of the feature-based method is that when the object is very worn from handling or use, the system can still identify the object as original, which may be impossible with the bitmapped approach.


Another aspect of this disclosure relates to detecting a counterfeit or forged object, for example a document such as a driver's license or passport. In this case, there may be no “original” or source object digital fingerprint for comparison. Rather, “fingerprints” of known indicia of counterfeit or forged objects can be acquired and stored. For example, a large number of counterfeit New York State driver's licenses might be obtained by law enforcement officials in a raid or the like. Digital images of those forged documents can be acquired, and analyzed to form digital fingerprints, as described in more detail below. “Forgery feature vectors” of typical features that occur in the counterfeit licenses can be collected and stored in a database. Such indicia may include, for example, sharp, non-bleeding edges where a photograph has been replaced or torn paper fibers where an erasure occurred. These stored features from the counterfeit licenses can then be analyzed and stored as a reference set of fraudulent methods which can then be compared to new license fingerprints to detect a forged document. A count of “fraud indicator matches” can be compared to an empirical threshold to determine and quantify a confidence that a document is forged (or not).


Further, the fingerprinting approach described below can be used to determine whether a manufactured object meets its manufactured specifications. Applications of the system include but are not limited to object authentication, anti-counterfeiting, determining the provenance of an object, and compliance with manufacturing specifications.


Additional aspects and advantages of this invention will be apparent from the following detailed description of preferred embodiments, which proceeds with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an example of an authentication region and object feature template definition for a U.S. passport.



FIG. 2 is a simplified flow diagram of a process for digital fingerprint generation and registration.



FIG. 3 is a simplified flow diagram of a process for authentication of a previously fingerprinted object.



FIG. 4 is a simplified flow diagram illustrating a process for object inspection to detect evidence of counterfeits.



FIG. 5 is a simplified flow diagram illustrating an object manufacturing inspection process.



FIG. 6 is a simplified flow diagram illustrating a method for building a database for use in detecting forged or altered documents.



FIG. 7 is a simplified flow diagram illustrating a method for using a digital fingerprint of a suspect document to detect a potential forgery or alteration associated with the document.



FIG. 8 is a simplified diagram of a prior art bitmap comparison method for comparing images.



FIG. 9 is an example of a photograph and an image created by Fast Fourier Transform (FFT) of the image data.



FIG. 10 is a simple illustration of fingerprint feature extraction from an original digital image.



FIG. 11 is a simple illustration of fingerprint feature extraction from a comparison or candidate image.



FIG. 12 is a simple illustration of fingerprint feature comparison for identifying or authenticating an object.



FIG. 13A shows an image of the numeral “3” representing the first digit in a serial number of an “original” or known U.S. dollar bill.



FIG. 13B shows an image of the numeral “3” representing the first digit in a serial number of a U.S. dollar bill to be authenticated.



FIG. 14A is an illustration of results of feature extraction showing selected areas of interest in the image of FIG. 13A.



FIG. 14B is an illustration of results of feature extraction showing selected areas of interest in the image of FIG. 13B.



FIG. 15A shows the same dollar bill image as in FIG. 13A, juxtaposed with FIG. 15B for comparison.



FIG. 15B shows an image of the numeral “3” that has been damaged or degraded.



FIG. 16A shows detail of two fingerprint feature locations on the numeral 3.



FIG. 16B shows detail of the damaged bill with the corresponding fingerprint feature locations called out for comparison.



FIG. 17 is a simplified illustration of a rotational transformation in the process of comparing digital fingerprints of two images.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The methods described in this disclosure enable the identification of objects without attaching or associating any physical tags or materials with the object. A system does this by creating a unique digital signature for the object, which is referred to as a digital fingerprint. Digital fingerprinting utilizes the natural structure of the object, or essentially random features created incidental to the manufacturing process, to generate a unique digital signature for that object, much like a human fingerprint. Also like a human fingerprint, the digital fingerprint can be stored and retrieved to identify objects when they are encountered at a later date.


Eliminating the need to add tags or any physical modifications to the object offers a number of advantages to manufacturers, distributors, sellers and owners of goods. It reduces the cost of manufacturing, and is more secure than physical tagging. Physical tags may be lost, modified, stolen, duplicated, or counterfeited; digital fingerprints cannot.


Unlike prior art approaches that simply utilize a comparison of pixels, a system in accordance with this disclosure utilizes the extraction of features to identify and authenticate objects. Feature extraction enables us to take a large amount of information and reduce it to a smaller set of data points that can be processed more efficiently. For example, a large digital image that contains tens of thousands of pixels may be reduced to just a few features that can effectively identify the object. This reduced set of data we call a digital fingerprint. This digital fingerprint contains a set of individual fingerprint features which are stored as feature vectors. These vectors make image processing more efficient and reduce storage requirements, as the entire image need not be stored in the database, only the feature vectors. Examples of feature extraction algorithms include but are not limited to edge detection, corner detection, blob detection, wavelet features; Gabor, gradient and steerable output filter histograms, scale-invariant feature transformation, active contours, shape contexts and parameterized shapes.
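As a concrete illustration of reducing a large image to a compact set of feature vectors, the following sketch uses OpenCV's ORB keypoint detector purely as a stand-in for the extraction algorithms listed above; the file name is hypothetical, and this is not the only or prescribed implementation.

```python
import cv2  # pip install opencv-python

# Load a scanned authentication region (grayscale is sufficient; file name is illustrative).
image = cv2.imread("authentication_region.png", cv2.IMREAD_GRAYSCALE)

# ORB locates corner-like areas of interest and computes a compact descriptor for each.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(image, None)

# Each keypoint carries a location and a size (analogous to a centroid and radius);
# each row of `descriptors` is one feature vector. Only these need be stored; the
# original image can then be discarded.
if descriptors is not None:
    for kp, desc in zip(keypoints[:3], descriptors[:3]):
        print(kp.pt, kp.size, desc[:8])
```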


While the most common applications of our system may be in the authentication of manufactured goods and documents, the system is designed to be applicable to any object that can be identified, characterized, quality tested, or authenticated with a digital fingerprint. These include but are not limited to mail pieces, parcels, art, coins, currency, precious metals, gems, jewelry, apparel, mechanical parts, consumer goods, integrated circuits, firearms, pharmaceuticals and food and beverages. Here we use the term “system” in a broad sense, including our methods as well as apparatus arranged to implement such methods.


Scanning


In an embodiment, an object is scanned and identified either at initial manufacture or at the time of first contact with the system. This point of identification is preferably done when the item is either in the possession of its manufacturer, or has been transferred by secure means to the current holder so that its legitimacy at point of identification is adequately established. When such a process is impossible, as in the example of artworks or old coins, the object may be fingerprinted after the object is authenticated by an expert while its provenance is still secure.


In this application, we use the term “scan” in a broad sense. We refer to any means for capturing an image or set of images, which may be in digital form or transformed into digital form. The images may be two dimensional, three dimensional, or be in the form of a video. Thus a “scan” may refer to an image (or digital data that defines an image) captured by a scanner, a camera, a specially-adapted sensor array such as CCD array, a microscope, a smart phone camera, a video camera, an x-ray machine, etc. Broadly, any device that can sense and capture electromagnetic radiation that has traveled through an object, or reflected off of an object, is a candidate to create a “scan” of the object. Other means to extract “fingerprints” or features from an object may be used; for example, through sound, physical structure, chemical composition, or many others. The remainder of this application will use terms like “image” but when doing so, the broader uses of this technology should be implied. In other words, alternative means to extract “fingerprints” or features from an object should be considered equivalents within the scope of this disclosure.


Authentication Regions


Because the system works with many different types of objects, it is necessary to define what parts of the digital images of the objects are to be used for the extraction of features for authentication purposes. This can vary widely for different classes of objects. In some cases it is the image of the entire object; in other cases it will be a specific sub-region of the image of the object.


For instance, for a photograph we may want to use the digital image of the entire photograph for feature extraction. Each photograph is different, and there may be unique feature information anywhere in the photograph. So in this case, the authentication region will be the entire photograph.


Multiple regions may be used for fingerprints for several reasons, two of which are particularly important. It may be that there are several regions where significant variations take place among different similar objects that need to be distinguished while, in the same objects, there may be regions of little significance, i.e., in which there is little or no variation among different objects. In that case, the authentication region is used primarily to eliminate regions of little interest.


A bank note, for example, has enough unique features that it can be authenticated by fingerprinting a few small, arbitrary regions scattered across its surface, together with recognizing the contents of a region indicating the note's value and a region containing the note's serial number. In such a case the fingerprints of any region (along with sufficient additional information to determine the bank note's value and its purported identity) may be sufficient to establish the authenticity of the bill; multiple fingerprinted regions are used solely in the event that one or more regions may be absent (through, for example, tearing) when the bill is later presented for authentication.


Sometimes, however, specific regions of an item must be authenticated to ensure the item is both authentic and has not been altered. A passport provides an example. On a passport the features preferably used for authentication are extracted from regions containing such specific identification information as the passport number, recipient name, and recipient photo. In that case, we define a template of all those regions whose alteration from the original would invalidate the passport, such regions including the passport holder's photo and unique personal data.



FIG. 1 illustrates an example of an authentication region and object feature template definition for a U.S. passport. In this figure, brace 100 refers to a simplified flow diagram of a process as follows. At process block 102, an object is scanned to generate an original “image”—technically a digital data file in any suitable format. We will simply refer to this data as an image. The original image is illustrated as the front page of a U.S. passport 150. Next, the system processes the image data to determine an authentication region. For example, here the authentication region is the lower portion of image 150, identified by dashed box 154. Next the process generates an authentication image for feature extraction, block 106. The image is illustrated at reference 156. Next, at block 108, the process defines one or more features for extraction. These are shown in the image 158 by dashed boxes 160, for example, surname, given name, and passport number regions.


Finally, at block 110, the process 100 comprises creating a feature template 120. In this example, template 120 identifies an object class (U.S. Passport), defines an authentication region (for example, by X-Y coordinates), and lists one or more features within that authentication region. Here, the list comprises passport number, photo, first name and last name.


The ability to define and store the optimal authentication region for a given class of objects offers significant benefits to the user. In many cases it is much easier to scan a limited region of an object than the entire object. For instance, in the case of an article of designer clothing, it is much easier to take a picture of the manufacturer's label than it is to take a picture of the entire garment. Further, defining such regions enables the detection of partial alteration of the object.


Once an authentication region is defined, specific applications can be created for different markets and classes of objects that can assist the user in locating and scanning the optimal authentication region. For instance, an appropriately sized location box and crosshairs can automatically appear in the viewfinder of a smartphone camera application to help the user center the camera on the authentication region, and automatically lock onto the region and take the picture when the camera is focused on the correct area.


In many cases, objects may have permanent labels or other identifying information attached to them. These can also be used as features. For instance, wine may be put into a glass bottle and a label affixed to the bottle. Since it is possible for a label to be removed and reused, simply using the label itself as the authentication region is often not sufficient. In this case we may define the authentication region to include both the label and the substrate it is attached to—in this case some portion of the glass bottle. This “label and substrate” approach may be useful in defining authentication regions for many types of objects, such as consumer goods and pharmaceutical packaging. If a label has been moved from its original position, this can be an indication of tampering or counterfeiting. If the object has “tamper-proof” packaging, this may also be useful to include in the authentication region.


In some cases, we will want to use multiple authentication regions to extract unique features. For a firearm, for example, we might extract features from two different parts of the weapon. It is, of course, important that both match the original, but since the two parts may both have been taken from the original weapon and affixed to a weapon of substandard quality, it may also be important to determine whether their relative positions have changed as well. In other words it may be necessary to determine that the distance (or other characteristic) between Part A's authentication region and Part B's authentication region is effectively unchanged, and only if that is accomplished can the weapon be authenticated.


Object Feature Template Definition

When a new type or class of object is being scanned into the system for the first time, the system can create an Object Feature Template (as shown in FIG. 1) that can be used to optimize subsequent authentication operations for that class of objects. This template can either be created automatically by the system, or by using a human-assisted process.


An Object Feature Template is not required for the system to authenticate an object, as the system can automatically extract features and create a digital fingerprint of an object without it. However, the presence of a template can greatly optimize the authentication process and add additional functionality to the system.









TABLE 1

Example Object Feature Template

CLASS: [Description of the object]
  United States Passport

AUTHENTICATION REGION: [Description of the authentication regions for the object]
  Region 1: (x1, y1, z1), (x2, y2, z2)
  . . .
  Region n

REGION MATCH LIST: [List of the regions that are required to match to identify an object]
  Region List: 1..n

FEATURES: [Key features of the object]
  Feature 1: Passport Number
  Feature 2: Photo
  Feature 3: First Name
  Feature 4: Last Name
  . . .
  Feature n

METHODS: [Programs that can be run on features of an object]
  Feature 2: Photo
    Method 1: [checkphoto.exe] Check for uneven edges indicating photo substitution
    . . .
    Method n
  . . .
  Feature n
    Method n

ADDITIONAL DATA: [Additional data associated with the object]
  Data 1: example data
  . . .
  Data n









The uses of the Object Feature Template include but are not limited to determining the regions of interest on the object, the methods of extracting fingerprinting and other information from those regions of interest, and methods for comparing such features at different points in time. The name “object feature template” is not important; other data with similar functionality (but a different moniker) should be considered equivalent.
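Table 1 lends itself to a straightforward machine-readable encoding. The following is one hypothetical representation of an Object Feature Template in Python; the field names simply mirror the entries in Table 1 and are not mandated by the disclosure.

```python
# Hypothetical machine-readable form of the Object Feature Template of Table 1.
passport_template = {
    "class": "United States Passport",
    "authentication_regions": [
        {"id": 1, "corners": [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]},  # placeholder coordinates
    ],
    "region_match_list": [1],                      # regions required to match to identify the object
    "features": ["Passport Number", "Photo", "First Name", "Last Name"],
    "methods": {
        "Photo": ["checkphoto.exe"],               # programs run against a feature, per Table 1
    },
    "additional_data": {},
}
```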


Four different but related uses for this technology are particularly in view in this disclosure. These are illustrative but are not intended to be limiting of the scope of the disclosure. These applications may be classified broadly as (1) authentication of a previously scanned original, (2) detection of alteration of a previously scanned original, (3) detection of a counterfeit object without benefit of an original, and (4) determination whether a manufactured item is within manufacturing or other applicable specification.


In case (1), the object is fingerprinted during the creation process (or while its provenance is unquestioned), or at the point where an expert has determined its authenticity and then the object is later re-fingerprinted, and the two sets of fingerprints are compared to establish authenticity of the object. This may be done by extracting a single fingerprint from the entire object or by extracting multiple sets of features from different authentication regions. It may also be facilitated by reading or otherwise detecting a serial number or other identifying characteristic of the object using optical character recognition or other means to make determining which original to compare it with easier. In many cases, manufacturing databases use serial numbers as identifiers. If we know the serial number we can directly access the database record for the object, and can directly compare the digital fingerprint to the original that was stored during the creation process, rather than searching the entire digital fingerprinting database for a match.


In case (2), the object is compared region by region with the original looking for low or nonexistent match of the fingerprint features from those regions. While case (1) is designed to determine whether the original object is now present, this case (2) is to determine whether the original object has been modified and if so, detecting how. In some embodiments, regions of interest having poor or no matching fingerprint features are presumed to have been altered.


In case (3), the item may not have been fingerprinted while its provenance was secure. An example would be legacy bills or passports created prior to initiating the use of a digital fingerprinting system during the creation process. In this case, the fingerprints of regions of interest may be compared with fingerprints from examples of known counterfeit objects, or with both those and fingerprints of known good objects. As an example, if a photo is added to a passport, the edge of the photo is liable to be sharper than the edge of the original, unaltered photo, indicating a cut and paste operation. Fingerprint characteristics of known good passports and those of passports known to have been altered by changing the photograph can be compared with the passport being inspected to determine whether it shows features of alteration.



FIG. 6 is a simplified flow diagram of a process 600 for building a database for use in detecting counterfeit (forged or altered) objects. At process block 602, digital image data is acquired of a known forged or altered document. Next, we extract features from the image data, as discussed above, block 604. Continuing, at block 606 a digital fingerprint is created based on the extracted features.


The digital fingerprint data is stored in a database record, block 608. Further, the record (digital fingerprint) is designated in the database as having features tending to evidence a forgery, block 610. The basic process may be repeated, loop 650, to acquire more images, and more features, to build the database.


Returning to case (4), the question of authenticity or alteration is not at issue. Instead we use the fingerprinting process to determine whether an object was manufactured sufficiently close to the manufacturing specification. In this case comparison of fingerprint features is against the ideal features of a presumed-perfect object, referred to as the “reference object”. The reference object may exist (e.g. be one or more examples of the object that has been inspected by hand and declared good enough to serve as a standard) or may be a programmatic ideal. In this latter case the “ideal” fingerprint features will be generated manually or by a program rather than scanned off an original.


The Object Feature Template can contain a variety of information related to that class of objects. For instance, it would typically include the authentication region(s) for that class of objects, which authentication regions are required to determine a match, and a list of key features that are typically used in authenticating that object.


Additionally, a template can define methods to be applied to features that can be used to examine an object for signs of unauthorized modification or counterfeiting. For instance, every time a passport is scanned into the system, a program can automatically be run to examine the passport photo for signs of alteration. If the passport was fingerprinted at creation, fingerprints extracted from each such region at creation will be compared to fingerprints from corresponding regions when the passport is presented for authentication. If the passport was not fingerprinted at creation, the region template can be used, for example, to look for sharp, non-bleeding edges that can indicate where a photograph has been replaced, or torn paper fibers that can indicate where an erasure occurred. In addition to the examples discussed above, the Object Feature Template is designed to be extensible, and can store any additional data that is related to the object.


Digital Fingerprint Generation

Once an object has been scanned and at least one authentication region has been identified, the final digital image that will be used to create the unique digital fingerprint for the object is created. This image (or set of images) will provide the source information for the feature extraction process.


A “digital fingerprinting feature” is a feature of the object that is innate to the object itself, a result of the manufacturing process, a result of external processes, or a result of any other random or pseudo-random process. For example, gemstones have a crystal pattern which provides an identifying feature set. Every gemstone is unique, and every gemstone has a series of random flaws in its crystal structure. This crystal pattern may be used to generate feature vectors for identification and authentication.


A “feature” in this description is typically not concerned with reading or recognizing meaningful content by using methods like OCR (optical character recognition). For example, a label on a scanned object with a printed serial number may give rise to various features in fingerprint processing, some of which may become part of a digital fingerprint feature set or vector that is associated with the object. The features may refer to light and dark areas, locations, spacing, ink blobs, etc. This information may refer to the printed serial number on the label, but in the normal course of feature extraction during the fingerprinting process there is no effort to actually “read” or recognize the printed serial number.


As part of identifying the object, however, for ease of comparison of fingerprint features with those of the original which are stored in the object database, such information may in fact be read and stored by utilizing such techniques as optical character recognition. In many cases, serial numbers may be used as the primary index into a manufacturer's database, which may also contain the digital fingerprints. It would be far faster, for example, to determine whether a bank note being inspected is a match with a particular original if we can use the serial number, say “A93188871A” as an index into the digital fingerprinting database, rather than trying to determine which one it matches by iterating through many thousands of fingerprints. In this case (and in similar cases of weapon and passport serial numbers), the index recognition speeds up the comparison process but is not essential to it.


Once a suitable digital fingerprint of an object is generated, it may be stored or “registered” in a database. For example, in some embodiments, the digital fingerprint may comprise one or more fingerprint features which are stored as feature vectors. The database should be secure. In some embodiments, a unique ID such as a serial number also may be assigned to an object. An ID may be a convenient index in some applications. However, it is not essential, as a digital fingerprint itself can serve as a key for searching a database. In other words, by identifying an object by the unique features and characteristics of the object itself, arbitrary identifiers, labels, tags, etc. are unnecessary.



FIG. 2 is a simplified flow diagram of a process 200 for digital fingerprint generation and registration. In this case, the process begins with scanning the object, block 202. An image 250 is acquired; in this illustration a U.S. passport is used. The next step is to identify or generate an authentication region, block 204. For example, the authentication region may be the portion 252. The authentication region may be identified, as discussed above, from an object feature template (see Table 1). Next an object class of the object is determined, block 206. The result is used to check a database for a corresponding object class template, decision 208. If there is no matching template, the process proceeds to extract features, block 210, without the aid of a template. A digital fingerprint is created based on the extracted features, block 212, and that digital fingerprint is stored in an object database 220. Alternatively, if a matching object class template is found at decision 208, the process continues to extract features, block 222, utilizing the class template 223 to identify and locate the features. Then, a digital fingerprint is created from the resulting feature data, block 224, and stored in a database 230.


Authentication and Inspection Processes

When an object is presented, it is scanned and an image is generated. At that point, the steps to be followed depend on the operation to be performed. Several illustrative cases are discussed below.


Case #1: For authentication of a previously fingerprinted object, the following steps may be followed (see FIG. 3, discussed below):

    • 1. The authentication region (or regions) is either determined automatically by the system, or by utilizing the authentication region definitions stored in an Object Feature Template.
    • 2. The relevant features are extracted from the authentication region(s) and the digital fingerprint is created. This will typically be in the form of feature vectors, but other data structures may be used as appropriate.
    • 3. Optionally, a unique identifier such as a serial number may be extracted and stored to augment subsequent search and identification functions.
    • 4. The digital fingerprint of the object to be authenticated is compared to the digital fingerprints stored in the database.
    • 5. The system reports whether the object is authentic; i.e. whether it matches one of the digital fingerprints stored in the database.
    • 6. The system may then store the digital fingerprint of the object to be authenticated in the database along with the results of the authentication process. Normally only the extracted features will be stored in the database, but the authentication image and/or the original image may be stored in the database for archival or audit purposes.



FIG. 3 illustrates such a process 300 in diagrammatic form. Beginning at start block 302, the process scans an object and creates an authentication image, block 304. The image is represented at 350, again using the passport example. Features are extracted, block 306, and optionally a serial number or similar ID number, preferably unique, may be extracted as well, block 310.


The extracted data is processed to create a digital fingerprint, block 312. An object database 320 may be queried for a matching fingerprint, block 314. A “match” may be defined by a probability or similarity metric. Results of the database query may be reported to a user, block 322. Finally, a new digital fingerprint may be stored into the database 320, shown at process block 330.


Case #2: For inspection of specific features of a previously fingerprinted object to determine whether they have been altered, the steps are similar to Case #1, but the process is used for the detection of alterations rather than authentication of the object:

    • 1. The authentication region (or regions) is either determined automatically by the system, or by utilizing the authentication region definitions stored in an Object Feature Template.
    • 2. The features to be inspected are extracted from the authentication region and the digital fingerprint is created. This will typically be in the form of feature vectors for the features to be inspected but other data structures may be used as appropriate.
    • 3. Optionally, a unique identifier such as a serial number may be extracted and stored to be used to augment subsequent search and identification functions.
    • 4. The digital fingerprint of the features to be inspected for alteration is compared to the fingerprint of the corresponding features from the original object stored in the database.
    • 5. The system reports whether the object has been altered; i.e. whether the digital fingerprint of the features to be inspected match those previously stored in the database from the original object.
    • 6. The system may then store the digital fingerprint of the features to be inspected in the database along with the results of the inspection process. Normally only the features will be stored in the database, but the authentication image and/or the original image may be stored in the database for archival or audit purposes.


Case #3: For inspection of the specific features of an object that has not been previously fingerprinted to determine whether the features have been altered, the following steps may be followed, referring now to FIG. 4.


The system scans the object, block 404, and creates an authentication image 450 that includes at least one authentication region. The authentication region (or regions) may be determined automatically by the system, or by utilizing the authentication region definitions defined in a stored Object Feature Template 406 as noted earlier. Either way, the process next extracts features from the authentication region(s), block 408, and a digital fingerprint is created. This will typically be in the form of feature vectors, but other data structures may be used as appropriate.


The features of the object are then analyzed, block 420, and examined for attributes indicative of a counterfeit, block 402. Methods may be applied to the features by running programs that are listed in the Object Feature Template that check features for signs of counterfeiting. Features can also be statistically compared to features of other objects of the same class that are stored in the database using Bayesian algorithms or other methods to find suspect variations that the standard methods may not catch. Optionally, a serial number or similar ID may be extracted, block 410.


The system preferably reports whether the object shows signs of alteration or counterfeiting, block 422. The system may then store the digital fingerprint of the object to be inspected, block 424, in the database 430 along with the results of the inspection process. Normally only the extracted features will be stored in the database, but the authentication image and/or the original image may be stored in the database for archival or audit purposes.


Case #4: For inspection of an object to determine whether it was manufactured in conformance with the manufacturer's specification, the following steps are followed; referring now to FIG. 5. The authentication region (or regions) for an object 502 is determined by utilizing the authentication region definitions stored in an Object Feature Template 506. In this illustration, the object 502 is a U.S. $100 bill. Scanning and creation of an authentication image are indicated at process block 504.


The manufacturing features are extracted from the regions of interest, block 508, and the digital fingerprint is created (not shown). This will typically be in the form of feature vectors for the manufacturing features, but other data structures may be used as appropriate. Optionally, a unique identifier such as a serial number may be extracted, block 510, and stored to be used to augment subsequent search and identification functions.


Next, the digital fingerprint of the manufacturing features of the object to be checked is analyzed, block 520, and compared to a fingerprint of the manufacturing features from a reference object (i.e., a perfect manufactured object) stored in the database, illustrated at block 521. In other words, in some embodiments, a reference object may be “fingerprinted” and used as a proxy for manufacturing specifications. In other cases, the digital fingerprint of the object, and more specifically the extracted feature vectors, may be compared to reference feature vectors that are based on manufacturing specifications. This type of comparison speaks to the quality of the object, but may not indicate provenance.


The system reports, block 522, whether the manufactured object meets specifications; i.e. whether the digital fingerprint of the manufacturing features sufficiently match those stored in the database from the reference object. The system may then store the digital fingerprint of the manufacturing features in the database 530, process block 524, along with the results of the manufacturing inspection process. Normally only the extracted manufacturing features will be stored in the database, but the manufacturing inspection image and/or the original image may be stored in the database for archival or audit purposes.


Because in all of the above cases we may be extracting features from images produced under variable lighting conditions, it is highly unlikely two different “reads” will produce the exact same digital fingerprint. In a preferred embodiment, the system is arranged to look up and match items in the database when there is a “near miss.” For example, two feature vectors [0, 1, 5, 5, 6, 8] and [0, 1, 6, 5, 6, 8] are not identical but by applying an appropriate difference metric the system can determine that they are close enough to say that they are from the same item that has been seen before. One example is to calculate Euclidean distance between the two vectors in multi-dimensional space, and compare the result to a threshold value. This is similar to the analysis of human fingerprints. Each fingerprint taken is slightly different, but the identification of key features allows a statistical match with a high degree of certainty.
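A minimal sketch of this near-miss comparison, using the two example vectors from the text and a Euclidean distance threshold, follows; the threshold value is arbitrary and chosen only for illustration.

```python
import numpy as np

def is_near_miss_match(v1, v2, threshold=2.0):
    """Return True if two feature vectors are close enough to be treated as the same item."""
    distance = np.linalg.norm(np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float))
    return distance <= threshold

a = [0, 1, 5, 5, 6, 8]
b = [0, 1, 6, 5, 6, 8]
print(is_near_miss_match(a, b))   # True: the distance is 1.0, within the example threshold
```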



FIG. 7 is a simplified flow diagram illustrating a method 700 for using a digital fingerprint of a suspect document to detect a potential forgery or alteration associated with the document. First, image data is acquired of a suspect document, block 702. Then the process extracts features from a selected region of the document, block 704. The extracted features are used to form a digital fingerprint, block 706. Next the digital fingerprint or the extracted features are used to form a query, and the query is used to access a forgery/alteration database in search of a matching record, block 708. If a matching record is returned, decision block 710, then the system may report a potential forgery or alteration associated with the suspect document, block 712. Optionally, multiple results may be combined in reaching a conclusion, block 720, where such are available.


Referring again to decision block 710, if no match is returned (i.e. no record matches the query criteria within a selected tolerance or confidence), then the process optionally may be repeated, block 714, for comparison to additional database records. In other words, the database search may be expanded, see loop 718. Again, multiple query results may be combined. Further, the entire process, defined by loop 730, may be repeated for inspecting and analyzing additional or different regions of the document, block 722. As discussed earlier, multiple regions of interest may be defined. Terminal conditions, not shown, may be implemented.
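One possible realization of the fraud-indicator counting described for FIG. 7 is sketched below. The stored forgery feature vectors, the distance threshold, and the empirical match-count threshold are all assumptions made for illustration, not values taken from the disclosure.

```python
import numpy as np

def count_fraud_indicator_matches(suspect_vectors, forgery_vectors, distance_threshold=2.0):
    """Count suspect feature vectors that lie close to any known forgery feature vector."""
    matches = 0
    for sv in suspect_vectors:
        for fv in forgery_vectors:
            if np.linalg.norm(np.asarray(sv, float) - np.asarray(fv, float)) <= distance_threshold:
                matches += 1
                break
    return matches

def is_likely_forged(suspect_vectors, forgery_vectors, empirical_threshold=5):
    """Report a potential forgery when enough fraud indicators match (cf. FIG. 7, block 712)."""
    return count_fraud_indicator_matches(suspect_vectors, forgery_vectors) >= empirical_threshold
```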



FIG. 10 is an illustration of an example of feature extraction from a digital image. The original image data, on the left, is searched by any of various software processes to detect and locate a feature in the image. In this case, the only important shape in this image is a square, and it extracts that feature, as shown in the middle of the figure, with “1” pixel values. (Most real implementations will have greater than one-bit pixel values.) Then, the extracted fingerprint feature may be stored as a feature vector as illustrated on the right side of the figure. A feature vector is an n-dimensional vector of numerical values that represent the shape.


In this approach, we may store only the features, not the entire image. In fact, after feature extraction the original image can be discarded. This has obvious advantages in terms of reduced storage requirements. Typical algorithms used for extracting features include but are not limited to edge detection, corner detection, blob detection, wavelet features; Gabor, gradient and steerable output filter histograms, scale-invariant feature transformation, active contours, shape contexts and parameterized shapes.


Referring now to FIG. 11, it illustrates essentially the same process for accessing a comparison or candidate image, extracting features from that image, and again storing each of them as a feature vector. Note that the example above is presented solely for the purpose of explanation, and actual implementations will vary. For instance, many shapes can be parameterized and stored even more efficiently. Instead of storing all the pixels along the boundaries, the square in FIG. 10 could actually just be stored as the lower left and upper right corner points ((x1, y1), (x2, y2)). Similarly, a circle could be stored with just the center point and radius. Feature vectors can store a wide variety of n-dimensional representations such as point, lines, polylines, edges, ridges, histograms and many others.
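The parameterized storage mentioned above can be illustrated with two small records; the field names are hypothetical.

```python
# A square stored as its lower-left and upper-right corners rather than boundary pixels.
square = {"type": "rectangle", "p1": (10, 10), "p2": (42, 42)}

# A circle stored as its center point and radius.
circle = {"type": "circle", "center": (128, 64), "radius": 9.5}
```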


Once the features are extracted from the original image and the candidate image, the features can be compared directly to determine if there is a match. Typical algorithms for comparing features include but are not limited to nearest neighbor, hashing, indexing feature sets with visual vocabularies, support vector machines, multilayer perceptron and random forests and ferns. A comparison of these feature vectors is illustrated in FIG. 12, resulting in a match.
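A minimal nearest-neighbor comparison of two descriptor sets is sketched below, using OpenCV's brute-force matcher as a stand-in for the matching algorithms listed above; the use of binary (ORB) descriptors and the acceptance criterion are assumptions for illustration.

```python
import cv2

def match_fingerprints(desc_original, desc_candidate, min_matches=20, max_distance=40):
    """Nearest-neighbor matching of binary (ORB) descriptors; returns (matched?, good matches)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_original, desc_candidate)
    good = [m for m in matches if m.distance <= max_distance]
    return len(good) >= min_matches, good
```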



FIG. 13A illustrates an image of the numeral “3” representing a number printed on an “original” or known U.S. dollar bill. This bill may have been fingerprinted, for example, at the time of manufacture or public release, as described herein. As noted below, fingerprint databases of currency and the like may be secured. And such databases preferably exclude raw image data. This image, on the order of about 40× magnification, shows a couple of distinctive features visible to the naked eye.



FIG. 13B illustrates an image of a number printed on a second or unknown U.S. dollar bill. This second bill may be fingerprinted using the same process, and then the resulting digital fingerprints, i.e., the respective fingerprint feature vectors, may be compared as further explained below, to determine whether or not the second bill is in fact the same one as the first bill, even though it may have changed from wear and tear.



FIG. 14A is a simplified illustration of the results of feature extraction applied to the numeral 3 of FIG. 13A. (Only the ends of the numeral are shown.) Two areas of interest are called out by circles 1720 and 1750. Below we discuss how these areas may be selected in an image. Fingerprint feature extraction is applied to each of these circular regions. The results for each location are stored in fingerprint feature vectors. A collection of feature vectors, say for location 1750, may be stored as a feature vector array. FIG. 14B is a simplified illustration of the results of feature extraction applied to the numeral 3 of FIG. 13B. The same fingerprinting process is applied to this image. The same locations of interest as in FIG. 14A are labeled 1720 and 1760, respectively. Then the stored features (from the original object) are compared with the features extracted from the new object. As in this case, if the features are not encountered in the second object, it is not a match.


One of the key advantages to the feature-based method is that when the object is very worn from handling or use, the system can still identify the object as original, which may be impossible with the bitmapped approach. FIG. 15A shows the same dollar bill image as in FIG. 13A, juxtaposed with FIG. 15B for comparison. FIG. 15B shows the same bill after machine washing, perhaps in someone's pocket.


In FIG. 15B, the image (actually the dollar bill) has been degraded; there is significant loss of ink and destruction of the paper surface in multiple locations. A bitmapped approach would clearly fail to produce a match here, as the number of pixels that differ is significant—only relatively few of the pixels are the same as in the original.



FIG. 16A shows the detail of two fingerprint feature locations as before, 1610 and 1650. FIG. 16B shows detail of the damaged bill with the corresponding locations called out as 1620 and 1660, respectively. Here, one can see visually why a comparison of the corresponding fingerprint feature vectors would be adequate to result in a match. In practice, a much larger number of features would be used.


The image of the damaged bill is analyzed by a processor. The processor accesses a database of previously stored fingerprint data. If the dollar bill serial number is legible (by eye or machine), the record for the corresponding bill may be accessed from the datastore using the serial number as an index. Similarly, if any portion of the serial number is legible, the search for a matching record can be narrowed on that basis. Either way, a candidate record, containing a set of stored regions of interest may be compared to the suspect image.


As explained above, in addition to being able to recognize a worn object, the feature-based approach can deal with problems like rotated images. This is especially important in a system where the retail customer may be taking a picture of an object to be authenticated. In this case external factors like lighting and rotation are not under the manufacturer's control.


Referring now to FIG. 17, it shows the original image on the left side, with a small set of fingerprint features marked as small diamond shapes. This is merely a callout symbol for illustration. In a preferred implementation, as noted, circular areas are used. For each feature (preferably identified in the database record), a search is conducted of the suspect image on the right side of FIG. 17 for a matching feature. The position may not match exactly, due to “stretch”—an effective difference in magnification—and/or due to rotation of the image. Although the locations may not match literally, a mathematical transformation can be defined that maps one image to the other, thereby accounting for rotation and stretch as appropriate. Thus, a bounding rectangle A indicated by the box in the left side image may be mapped to a quadrilateral indicated by the line B in the right side image.


Once an appropriate transformation is found, further matching can be done to increase the level of confidence of the match if desired. In some applications, a number of matches on the order of tens or hundreds of match points is sufficient. On the other hand, the number of non-match points also should be taken into account. That number should be relatively very low, but it may be non-zero due to random dirt, system “noise” and the like. Preferably, the allowed mapping or transformation should be restricted depending on the type of objects under inspection. For instance, some objects may be inflexible, which may restrict the possible deformations of the object.
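The rotation-and-stretch mapping of FIG. 17 can be estimated from matched feature locations. The sketch below uses OpenCV's partial affine estimator with RANSAC as one possible way to recover such a transformation and to count matching versus non-matching points; it is an illustrative choice, not the method prescribed by the disclosure.

```python
import numpy as np
import cv2

def estimate_mapping(points_original, points_suspect):
    """Estimate a rotation/scale/translation mapping original feature locations onto suspect ones."""
    src = np.asarray(points_original, dtype=np.float32).reshape(-1, 1, 2)
    dst = np.asarray(points_suspect, dtype=np.float32).reshape(-1, 1, 2)
    # RANSAC tolerates a limited number of non-matching points (dirt, system noise, etc.).
    matrix, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    inlier_count = int(inliers.sum()) if inliers is not None else 0
    return matrix, inlier_count
```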


To summarize the imaging requirements for a typical fingerprinting system, for example for inspecting documents, it should provide sufficient imaging capability to show invariant features. The particulars will depend on the regions used for authentication. For many applications, 10× magnification is adequate. For ink bleeds on passports, bills and other high-value authentication, 40× power is more than sufficient. In preferred embodiments, the software should implement a flexible response to accommodate misalignment (rotation), orientation and scale changes. Color imaging and analysis is generally not required for using the processes described above.


Hardware and Software

Most of the equipment discussed above comprises hardware and associated software. For example, the typical portable device is likely to include one or more processors and software executable on those processors to carry out the operations described. We use the term software herein in its commonly understood sense to refer to programs or routines (subroutines, objects, plug-ins, etc.), as well as data, usable by a machine or processor. As is well known, computer programs generally comprise instructions that are stored in machine-readable or computer-readable storage media. Some embodiments of the present invention may include executable programs or instructions that are stored in machine-readable or computer-readable storage media, such as a digital memory. We do not imply that a “computer” in the conventional sense is required in any particular embodiment. For example, various processors, embedded or otherwise, may be used in equipment such as the components described herein.


Memory for storing software again is well known. In some embodiments, memory associated with a given processor may be stored in the same physical device as the processor (“on-board” memory); for example, RAM or FLASH memory disposed within an integrated circuit microprocessor or the like. In other examples, the memory comprises an independent device, such as an external disk drive, storage array, or portable FLASH key fob. In such cases, the memory becomes “associated” with the digital processor when the two are operatively coupled together, or in communication with each other, for example by an I/O port, network connection, etc. such that the processor can read a file stored on the memory. Associated memory may be “read only” by design (ROM) or by virtue of permission settings, or not. Other examples include but are not limited to WORM, EPROM, EEPROM, FLASH, etc. Those technologies often are implemented in solid state semiconductor devices. Other memories may comprise moving parts, such as a conventional rotating disk drive. All such memories are “machine readable” or “computer-readable” and may be used to store executable instructions for implementing the functions described herein.


A “software product” refers to a memory device in which a series of executable instructions are stored in a machine-readable form so that a suitable machine or processor, with appropriate access to the software product, can execute the instructions to carry out a process implemented by the instructions. Software products are sometimes used to distribute software. Any type of machine-readable memory, including without limitation those summarized above, may be used to make a software product. That said, it is also known that software can be distributed via electronic transmission (“download”), in which case there typically will be a corresponding software product at the transmitting end of the transmission, or the receiving end, or both.


Integration with Bill Processing Equipment

We propose creation of fingerprint data at the U.S. Treasury or any other producer (printer) of negotiable bills or notes. Preferably, such a system utilizes random, microscopic features unique to each bill's paper and printing. For example, the system may extract features from unpublished locations on the bills. In other words, the specific locations used for authentication are maintained in secrecy. The extracted features may be converted into encrypted feature vectors and associated in a data store with the corresponding bill serial number (the serial number having been readily captured by the same scanner or a separate one). In this way, a protected database may be created that is addressable or searchable by serial number or feature vector, but only by authorized users. (Here, a “user” may be a machine with electronic access to the database.)
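A minimal sketch of such a protected, serial-number-indexed store follows. It uses SQLite and Fernet symmetric encryption purely as stand-ins; the schema, file names, and key management are assumptions for illustration, not the prescribed design.

```python
import sqlite3, json
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()       # in practice, a key managed and protected by the issuing authority
cipher = Fernet(key)

db = sqlite3.connect("bill_fingerprints.db")
db.execute("CREATE TABLE IF NOT EXISTS fingerprints (serial TEXT PRIMARY KEY, payload BLOB)")

def register_bill(serial, feature_vectors):
    """Store encrypted feature vectors, addressable only by serial number."""
    payload = cipher.encrypt(json.dumps(feature_vectors).encode())
    db.execute("INSERT OR REPLACE INTO fingerprints VALUES (?, ?)", (serial, payload))
    db.commit()

def lookup_bill(serial):
    """Return the decrypted feature vectors for a serial number, or None if not registered."""
    row = db.execute("SELECT payload FROM fingerprints WHERE serial = ?", (serial,)).fetchone()
    return json.loads(cipher.decrypt(row[0])) if row else None

register_bill("A93188871A", [[0, 1, 5, 5, 6, 8]])
print(lookup_bill("A93188871A"))
```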


Equipment is known for stacking, counting, and “strapping” paper money notes. A “strap” is a package of 100 notes, held together by a single paper band, as required for deposit by U.S. Federal Reserve rules. Various note handling equipment may be modified to include a digital scanner, for example, an optical scanner, to capture images of each bill or note as it is processed. The scanner may be coupled to a suitable processor, as explained above, for storing the captured images, and for processing the images to authenticate them and/or to detect counterfeit items.
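

As one sketch of how such modified equipment might hand notes off for authentication, the hypothetical process_strap() routine below captures an image and serial number for each note as it is counted and queues the note, tagged with its strap and sequence position, for the comparison step described next. The scanner and note interfaces are assumptions; real capture hardware will differ.

    # Hypothetical capture hook for note-handling (strapping) equipment.
    def process_strap(scanner, fingerprint_queue: list, strap_id: str) -> None:
        """Capture each note as it is counted and queue it for authentication."""
        for sequence_number, note in enumerate(scanner, start=1):
            fingerprint_queue.append({
                "strap_id": strap_id,
                "sequence_number": sequence_number,   # e.g., 28th bill in the strap
                "serial_number": note.read_serial(),  # serial read from the same scan
                "image": note.capture_image(),        # optical image of the note
            })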


Preferably, such a system is granted access to the protected database that is searchable by serial number or digital fingerprint. It may then look up each bill scanned and compare features of the digital image to the digital fingerprint stored in the protected database for the corresponding serial number. This process may be done in batch, in real time, or in near real time. The comparison, as further described above, may provide a confidence metric or a simple yes/no (authentic/counterfeit) result for each note. It may identify a counterfeit note not only by serial number but also by sequence number, to facilitate locating the bill (“the 28th bill in strap #218”). In this way, a bank or other institution can detect counterfeit notes in a timely manner.
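

A sketch of that lookup-and-compare step follows; it reuses the extract_features() placeholder and the cipher and protected database from the enrollment sketch above. The Euclidean distance, threshold, and confidence formula are illustrative stand-ins for the matching methods described earlier in this disclosure.

    # Illustrative lookup and comparison against the protected database.
    import json
    import numpy as np


    def authenticate_note(entry: dict, db: dict, cipher, secret_locations: list,
                          threshold: float = 5.0) -> dict:
        """Return an authentic/suspect decision and a confidence metric,
        identifying the note by serial number and position in its strap."""
        report = {"serial_number": entry["serial_number"],
                  "strap_id": entry["strap_id"],
                  "sequence_number": entry["sequence_number"]}
        token = db.get(entry["serial_number"])
        if token is None:
            report.update(result="suspect", reason="serial number not enrolled")
            return report
        reference = np.array(json.loads(cipher.decrypt(token)))
        observed = np.array([extract_features(entry["image"], loc)
                             for loc in secret_locations])
        distance = float(np.linalg.norm(reference - observed))
        report.update(confidence=1.0 / (1.0 + distance),
                      result="authentic" if distance < threshold else "suspect")
        return report


    def check_strap(queue: list, db: dict, cipher, secret_locations: list) -> list:
        """Batch-check a queued strap, returning only the notes flagged as suspect."""
        reports = [authenticate_note(e, db, cipher, secret_locations) for e in queue]
        return [r for r in reports if r["result"] != "authentic"]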


In another embodiment, a scanner, which may be portable and optionally wireless, may be made available at a bank teller station for the teller to authenticate bills presented for deposit or exchange. Further, such a system may be installed at an ATM to automatically authenticate bills presented for deposit. The ATM may be programmed to accept the bills, to get them “off the street,” but flag them as counterfeit or suspect.


The term “note” is commonly used in the U.K. with regard to paper money, while the term “bill” is more common in the U.S. We use the terms interchangeably here. They should not be confused with U.S. Treasury “bills” and “notes,” which are not currency but debt instruments. That said, the inventions disclosed herein are applicable to those as well as to currency, although nowadays such instruments are mainly processed by electronic “book entries” rather than as paper documents. Older U.S. Savings Bonds, and other bearer instruments in any country, can likewise be authenticated by various embodiments of the present invention.


Having described and illustrated the principles of the disclosure and some illustrative embodiments thereof, it should be apparent that the invention may be modified in arrangement and detail without departing from such principles. For convenience, we summarize below some aspects of the disclosure. The following list is merely illustrative and not intended to limit or define all the inventions disclosed. The scope of the present invention should, therefore, be determined only by the following claims.

Claims
  • 1. A machine comprising: an input device that captures image data of at least a portion of a suspect physical object; and a digital fingerprint system having a processor and a memory wherein the processor is configured to create a digital fingerprint of the suspect physical object based on the image data, wherein the digital fingerprint includes a first piece of data identifying a plurality of regions of interest of the suspect physical object and, for each region of interest, a second piece of data forming at least one fingerprint feature vector that describes a fingerprint feature extracted from the region of interest of the suspect physical object; store the created digital fingerprint for the suspect physical object in a database, the database having stored therein multiple digital fingerprints associated with multiple physical objects, wherein the digital fingerprint identifies the suspect physical object as being unique among the multiple physical objects.
  • 2. The machine of claim 1, wherein the input device includes at least one optical sensor.
  • 3. The machine of claim 1, wherein the fingerprint feature includes manufacturing features of the suspect physical object.
  • 4. The machine of claim 1, wherein the processor of the digital fingerprint system is further configured to extract a unique identifier from the suspect physical object; and store the unique identifier in the database to augment search and identification of the suspect physical object.
  • 5. The machine of claim 1, wherein the input device further captures image data of at least a portion of a reference physical object that is manufactured according to a specification of a manufacturer; and the processor of the digital fingerprint system is further configured to create a digital fingerprint of the reference physical object, wherein the digital fingerprint of the reference physical object includes a first piece of data identifying a plurality of regions of interest of the reference physical object and, for each region of interest, a second piece of data forming at least one reference feature vector that describes a fingerprint feature extracted from the region of interest of the reference physical object, wherein the digital fingerprint of the reference physical object serves as a proxy for the specification of the manufacturer; and store the digital fingerprint of the reference physical object in the database.
  • 6. The machine of claim 5, wherein the processor of the digital fingerprint system is further configured to compare the digital fingerprint of the suspect physical object to the digital fingerprint of the reference physical object to generate a result that determines whether the suspect physical object was manufactured in conformance with the specification of the manufacturer; determine the result indicates a match between the at least one fingerprint feature vector included in the digital fingerprint of the suspect physical object and the at least one reference feature vector included in the digital fingerprint of the reference physical object; and determine the suspect physical object is manufactured in conformance with the specification of the manufacturer based on the match.
  • 7. The machine of claim 1, wherein the fingerprint feature of the suspect physical object includes dimensional qualities associated with at least one physical feature of the suspect physical object.
  • 8. The machine of claim 1, wherein the fingerprint feature of the suspect physical object includes non-alpha-numeric, optically discernible features of the suspect physical object.
  • 9. The machine of claim 1, wherein the fingerprint feature of the suspect physical object includes multiple known indicia of a counterfeit or forged object.
  • 10. The machine of claim 1, wherein the processor of the digital fingerprint system is further configured to parse the image data into the plurality of regions of interest; collect a piece of region-of-interest data from each region of interest of the image data; generate identifier data for each region of interest, the identifier data pairing an identifier with the region-of-interest data of a corresponding region of interest; and associate the identifier data with the digital fingerprint of the suspect physical object.
  • 11. The machine of claim 10, wherein the database includes at least one list of classification elements and individual element values for each of the classification elements, wherein each classification element and its corresponding individual element values together define a corresponding pattern, and wherein the database is arranged to support searching a plurality of patterns for a match based on identifier data associated with digital fingerprints stored in the database.
  • 12. The machine of claim 1, wherein the digital fingerprint of the suspect physical object is based on image data of the portion of the suspect physical object, wherein the image data is independent from identifying information about the suspect physical object included in tags, labels, or other materials added to the suspect physical object.
  • 13. The machine of claim 1, wherein the processor of the digital fingerprint system is further configured to extract information from the image data; categorize the extracted information to determine a category; and include a category indicator in the digital fingerprint of the suspect physical object.
  • 14. The machine of claim 1, wherein the pieces of data of the digital fingerprint of the suspect physical object are responsive to one or more characteristics appearing in a presentable image generated from the image data, the one or more characteristics including a plurality of markings discernible in the presentable image together with respective region data of each of the plurality of markings.
  • 15. The machine of claim 14, wherein the pieces of data of the digital fingerprint of the suspect physical object include size data for each of the plurality of markings.
  • 16. The machine of claim 14, wherein the pieces of data of the digital fingerprint of the suspect physical object include dimension data for each of the plurality of markings, the dimension data defined, at least in part, by corresponding dimensions of each respective marking.
  • 17. The machine of claim 14, wherein the pieces of data of the digital fingerprint of the suspect physical object include pixel-count data for each of the plurality of markings, the pixel-count data defined, at least in part, by a total pixel count of each respective marking.
  • 18. A system comprising: a processor and memory connected to each other; a database connected to the processor, wherein the database stores multiple digital fingerprints associated with multiple physical objects; an input device connected to the processor; a network interface configured to communicate with the processor and the input device over a network; and a plurality of lines of instructions stored in the memory and executed by the processor of the system that is configured to acquire, from the input device, image data of at least a portion of a suspect physical object; create a digital fingerprint of the suspect physical object based on the image data, wherein the digital fingerprint includes a first piece of data identifying a plurality of regions of interest of the suspect physical object and, for each region of interest, a second piece of data forming at least one fingerprint feature vector that describes a fingerprint feature extracted from the region of interest; and store the digital fingerprint in the database, wherein the digital fingerprint identifies the suspect physical object as being unique among the multiple physical objects.
  • 19. The system of claim 18, wherein the fingerprint feature includes manufacturing features of the suspect physical object.
  • 20. The system of claim 18, wherein the processor is further configured to acquire, from the input device, image data of at least a portion of a reference physical object that is manufactured according to a specification of a manufacturer; create a digital fingerprint of the reference object, wherein the digital fingerprint of the reference physical object includes a first piece of data identifying a plurality of regions of interest of the reference physical object and, for each region of interest, a second piece of data forming at least one reference feature vector that describes a fingerprint feature extracted from the region of interest of the reference physical object; and store the digital fingerprint of the reference physical object in the database.
  • 21. The system of claim 20, wherein the processor is further configured to compare the digital fingerprint of the suspect physical object to the digital fingerprint of the reference physical object to generate a result that determines whether the suspect physical object was manufactured in conformance with the specification of the manufacturer; determine the result indicates a match between the at least one fingerprint feature vector included in the digital fingerprint of the suspect physical object and the at least one reference feature vector included in the digital fingerprint of the reference physical object; and determine the suspect physical object is manufactured in conformance with the specification of the manufacturer based on the match.
RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/862,556, filed Jan. 4, 2018, which is a continuation of U.S. application Ser. No. 15/208,339, filed Jul. 12, 2016 (now U.S. Pat. No. 10,192,140), which is a divisional of U.S. application Ser. No. 14/531,724, filed Nov. 3, 2014 (now U.S. Pat. No. 9,443,298), which is a non-provisional, pursuant to 35 U.S.C. § 119(e), of U.S. provisional application No. 61/914,722, filed Dec. 11, 2013, and U.S. provisional application No. 61/898,780, filed Nov. 1, 2013, both incorporated herein by reference. Application Ser. No. 14/531,724 is a continuation-in-part of U.S. application Ser. No. 14/290,653, filed May 29, 2014 (now U.S. Pat. No. 9,350,552), which is a continuation of U.S. application Ser. No. 13/410,753, filed Mar. 2, 2012 (now U.S. Pat. No. 8,774,455), which is a non-provisional, pursuant to 35 U.S.C. § 119(e), of U.S. provisional application No. 61/448,465, filed Mar. 2, 2011, each of which is incorporated herein by reference.

US Referenced Citations (318)
Number Name Date Kind
4218674 Brosow et al. Aug 1980 A
4423415 Goldman Dec 1983 A
4677435 Causse et al. Jun 1987 A
4700400 Ross Oct 1987 A
4883971 Jensen Nov 1989 A
4921107 Hofer May 1990 A
5031223 Rosenbaum et al. Jul 1991 A
5079714 Manduley et al. Jan 1992 A
5393939 Nasuta et al. Feb 1995 A
5422821 Allen et al. Jun 1995 A
5514863 Williams May 1996 A
5518122 Tilles et al. May 1996 A
5521984 Denenberg et al. May 1996 A
5703783 Allen et al. Dec 1997 A
5719939 Tel Feb 1998 A
5734568 Borgendale et al. Mar 1998 A
5745590 Pollard Apr 1998 A
5883971 Bolle et al. Mar 1999 A
5923848 Goodhand et al. Jul 1999 A
5974150 Kaish et al. Oct 1999 A
6205261 Goldberg Mar 2001 B1
6246794 Kagehiro et al. Jun 2001 B1
6292709 Uhl et al. Sep 2001 B1
6327373 Yura Dec 2001 B1
6343327 Daniels et al. Jan 2002 B2
6360001 Berger et al. Mar 2002 B1
6370259 Hobson et al. Apr 2002 B1
6400805 Brown et al. Jun 2002 B1
6424728 Ammar Jul 2002 B1
6434601 Rollins Aug 2002 B1
6470091 Koga et al. Oct 2002 B2
6539098 Baker et al. Mar 2003 B1
6549892 Sansone Apr 2003 B1
6597809 Ross et al. Jul 2003 B1
6643648 Ross et al. Nov 2003 B1
6697500 Woolston et al. Feb 2004 B2
6741724 Bruce et al. May 2004 B1
6768810 Emanuelsson et al. Jul 2004 B2
6778703 Zlotnick Aug 2004 B1
6805926 Cote et al. Oct 2004 B2
6816602 Coffelt et al. Nov 2004 B2
6829369 Poulin et al. Dec 2004 B2
6961466 Imagawa et al. Nov 2005 B2
6985925 Ogawa Jan 2006 B2
6985926 Ferlauto et al. Jan 2006 B1
7016532 Boncyk et al. Mar 2006 B2
7031519 Elmenhurst Apr 2006 B2
7096152 Ong Aug 2006 B1
7120302 Billester Oct 2006 B1
7121458 Avant et al. Oct 2006 B2
7152047 Nagel Dec 2006 B1
7171049 Snapp Jan 2007 B2
7204415 Payne et al. Apr 2007 B2
7212949 Bachrach May 2007 B2
7333987 Ross et al. Feb 2008 B2
7343623 Ross Mar 2008 B2
7356162 Caillon Apr 2008 B2
7379603 Ross et al. May 2008 B2
7436979 Bruce et al. Oct 2008 B2
7477780 Boncyk et al. Jan 2009 B2
7518080 Amato Apr 2009 B2
7602938 Prokoski Oct 2009 B2
7674995 Desprez et al. Mar 2010 B2
7676433 Ross et al. Mar 2010 B1
7680306 Boutant et al. Mar 2010 B2
7720256 Desprez et al. May 2010 B2
7726457 Maier et al. Jun 2010 B2
7726548 Delavergne Jun 2010 B2
7748029 Ross Jun 2010 B2
7822263 Prokoski Oct 2010 B1
7834289 Orbke et al. Nov 2010 B2
7853792 Cowburn Dec 2010 B2
8022832 Vogt et al. Sep 2011 B2
8032927 Ross Oct 2011 B2
8108309 Tan Jan 2012 B2
8180174 Di et al. May 2012 B2
8180667 Baluja et al. May 2012 B1
8194938 Wechsler et al. Jun 2012 B2
8316418 Ross Nov 2012 B2
8374020 Katti Feb 2013 B2
8374399 Talwerdi Feb 2013 B1
8374920 Hedges et al. Feb 2013 B2
8391583 Mennie et al. Mar 2013 B1
8428772 Miette et al. Apr 2013 B2
8437530 Mennie et al. May 2013 B1
8457354 Kolar et al. Jun 2013 B1
8477992 Paul et al. Jul 2013 B2
8520888 Spitzig et al. Aug 2013 B2
8526743 Campbell et al. Sep 2013 B1
8774455 Elmenhurst et al. Jul 2014 B2
8959029 Jones et al. Feb 2015 B2
9031329 Farid et al. May 2015 B1
9058543 Campbell et al. Jun 2015 B2
9152862 Ross et al. Oct 2015 B2
9170654 Boncyk et al. Oct 2015 B2
9224196 Duerksen et al. Dec 2015 B2
9234843 Sopori et al. Jan 2016 B2
9245133 Durst et al. Jan 2016 B1
9350552 Elmenhurst May 2016 B2
9350714 Freeman et al. May 2016 B2
9361507 Hoyos et al. Jun 2016 B1
9361596 Ross et al. Jun 2016 B2
9424461 Yuan et al. Aug 2016 B1
9443298 Ross et al. Sep 2016 B2
9558463 Ross et al. Jan 2017 B2
9582714 Ross et al. Feb 2017 B2
9646206 Ross et al. May 2017 B2
9665800 Kuffner May 2017 B1
9741724 Seshadri et al. Aug 2017 B2
10037537 Withrow et al. Jul 2018 B2
10043073 Ross et al. Aug 2018 B2
10192140 Ross Jan 2019 B2
10199886 Li et al. Feb 2019 B2
10346852 Ross et al. Jul 2019 B2
10505726 Andon et al. Dec 2019 B1
10540664 Ross et al. Jan 2020 B2
10572883 Ross et al. Feb 2020 B2
10614302 Withrow et al. Apr 2020 B2
10621594 Land et al. Apr 2020 B2
10740767 Withrow Aug 2020 B2
10872265 Ross Dec 2020 B2
10936838 Wong Mar 2021 B1
11238146 Ross Feb 2022 B2
20010010334 Park et al. Aug 2001 A1
20010054031 Lee et al. Dec 2001 A1
20020015515 Lichtermann et al. Feb 2002 A1
20020073049 Dutta Jun 2002 A1
20020134836 Cash et al. Sep 2002 A1
20020168090 Bruce et al. Nov 2002 A1
20030015395 Hallowell et al. Jan 2003 A1
20030046103 Amato et al. Mar 2003 A1
20030091724 Mizoguchi May 2003 A1
20030120677 Vernon Jun 2003 A1
20030138128 Rhoads Jul 2003 A1
20030179931 Sun Sep 2003 A1
20030182018 Snapp Sep 2003 A1
20030208298 Edmonds Nov 2003 A1
20030219145 Smith Nov 2003 A1
20040027630 Lizotte Feb 2004 A1
20040101174 Sato et al. May 2004 A1
20040112962 Farrall et al. Jun 2004 A1
20040218791 Jiang et al. Nov 2004 A1
20040218801 Houle et al. Nov 2004 A1
20040250085 Tattan et al. Dec 2004 A1
20050007776 Monk et al. Jan 2005 A1
20050038756 Nagel Feb 2005 A1
20050065719 Khan et al. Mar 2005 A1
20050086256 Owens et al. Apr 2005 A1
20050111618 Sommer et al. May 2005 A1
20050119786 Kadaba Jun 2005 A1
20050125360 Tidwell et al. Jun 2005 A1
20050131576 De et al. Jun 2005 A1
20050137882 Cameron et al. Jun 2005 A1
20050160271 Brundage et al. Jul 2005 A9
20050169529 Owechko et al. Aug 2005 A1
20050188213 Xu Aug 2005 A1
20050204144 Mizutani Sep 2005 A1
20050251285 Boyce et al. Nov 2005 A1
20050257064 Boutant et al. Nov 2005 A1
20050289061 Kulakowski et al. Dec 2005 A1
20060010503 Inoue et al. Jan 2006 A1
20060083414 Neumann et al. Apr 2006 A1
20060109520 Gossaye et al. May 2006 A1
20060131518 Ross et al. Jun 2006 A1
20060177104 Prokoski Aug 2006 A1
20060253406 Caillon Nov 2006 A1
20070056041 Goodman Mar 2007 A1
20070071291 Yumoto et al. Mar 2007 A1
20070085710 Bousquet et al. Apr 2007 A1
20070094155 Dearing Apr 2007 A1
20070211651 Ahmed et al. Sep 2007 A1
20070211964 Agam et al. Sep 2007 A1
20070230656 Lowes et al. Oct 2007 A1
20070263267 Ditt Nov 2007 A1
20070269043 Launay et al. Nov 2007 A1
20070282900 Owens et al. Dec 2007 A1
20080005578 Shafir Jan 2008 A1
20080008377 Andel et al. Jan 2008 A1
20080011841 Self et al. Jan 2008 A1
20080013804 Moon et al. Jan 2008 A1
20080016355 Beun et al. Jan 2008 A1
20080128496 Bertranou et al. Jun 2008 A1
20080130947 Ross et al. Jun 2008 A1
20080219503 Di et al. Sep 2008 A1
20080250483 Lee Oct 2008 A1
20080255758 Graham et al. Oct 2008 A1
20080272585 Conard et al. Nov 2008 A1
20080290005 Bennett et al. Nov 2008 A1
20080294474 Furka Nov 2008 A1
20090028379 Belanger et al. Jan 2009 A1
20090057207 Orbke et al. Mar 2009 A1
20090106042 Maytal et al. Apr 2009 A1
20090134222 Ikeda May 2009 A1
20090154778 Lei et al. Jun 2009 A1
20090157733 Kim et al. Jun 2009 A1
20090223099 Versteeg Sep 2009 A1
20090232361 Miller Sep 2009 A1
20090245652 Bastos Oct 2009 A1
20090271029 Doutre Oct 2009 A1
20090287498 Choi Nov 2009 A2
20090307005 Omartin et al. Dec 2009 A1
20100027834 Spitzig et al. Feb 2010 A1
20100054551 Decoux Mar 2010 A1
20100070527 Chen Mar 2010 A1
20100104200 Baras et al. Apr 2010 A1
20100157064 Cheng et al. Jun 2010 A1
20100163612 Caillon Jul 2010 A1
20100166303 Rahimi Jul 2010 A1
20100174406 Miette et al. Jul 2010 A1
20100286815 Zimmermann Nov 2010 A1
20100289627 McAllister Nov 2010 A1
20110026831 Perronnin et al. Feb 2011 A1
20110064279 Uno Mar 2011 A1
20110081043 Sabol et al. Apr 2011 A1
20110091068 Stuck et al. Apr 2011 A1
20110161117 Busque et al. Jun 2011 A1
20110188709 Gupta et al. Aug 2011 A1
20110194780 Li et al. Aug 2011 A1
20110235920 Iwamoto et al. Sep 2011 A1
20110267192 Goldman et al. Nov 2011 A1
20120042171 White et al. Feb 2012 A1
20120089639 Wang Apr 2012 A1
20120130868 Loeken May 2012 A1
20120177281 Frew Jul 2012 A1
20120185393 Atsmon et al. Jul 2012 A1
20120199651 Glazer Aug 2012 A1
20120242481 Gernandt et al. Sep 2012 A1
20120243797 Di Venuto Dayer et al. Sep 2012 A1
20120250945 Peng et al. Oct 2012 A1
20130110719 Carter et al. May 2013 A1
20130162394 Etchegoyen Jun 2013 A1
20130212027 Sharma et al. Aug 2013 A1
20130214164 Zhang et al. Aug 2013 A1
20130256415 Callegari Oct 2013 A1
20130273968 Rhoads et al. Oct 2013 A1
20130277425 Sharma et al. Oct 2013 A1
20130284803 Wood et al. Oct 2013 A1
20140032322 Schwieger et al. Jan 2014 A1
20140140570 Ross et al. May 2014 A1
20140140571 Elmenhurst et al. May 2014 A1
20140184843 Campbell et al. Jul 2014 A1
20140201094 Herrington et al. Jul 2014 A1
20140270341 Elmenhurst et al. Sep 2014 A1
20140314283 Harding Oct 2014 A1
20140380446 Niu et al. Dec 2014 A1
20150058142 Lenahan et al. Feb 2015 A1
20150067346 Ross et al. Mar 2015 A1
20150078629 Gottemukkula et al. Mar 2015 A1
20150086068 Mulhearn et al. Mar 2015 A1
20150110364 Niinuma et al. Apr 2015 A1
20150117701 Ross et al. Apr 2015 A1
20150127430 Hammer May 2015 A1
20150248587 Oami et al. Sep 2015 A1
20150294189 Benhimane et al. Oct 2015 A1
20150309502 Breitgand Oct 2015 A1
20150371087 Ross et al. Dec 2015 A1
20160034913 Zavarehi et al. Feb 2016 A1
20160034914 Gonen et al. Feb 2016 A1
20160055651 Oami Feb 2016 A1
20160057138 Hoyos et al. Feb 2016 A1
20160072626 Kouladjie Mar 2016 A1
20160117631 McCloskey et al. Apr 2016 A1
20160162734 Ross et al. Jun 2016 A1
20160180485 Avila et al. Jun 2016 A1
20160180546 Kim et al. Jun 2016 A1
20160189510 Hutz Jun 2016 A1
20160203387 Lee et al. Jul 2016 A1
20160300234 Moss-Pultz et al. Oct 2016 A1
20160335520 Ross et al. Nov 2016 A1
20170004444 Krasko et al. Jan 2017 A1
20170032285 Sharma et al. Feb 2017 A1
20170076132 Sezan et al. Mar 2017 A1
20170132458 Short et al. May 2017 A1
20170153069 Huang et al. Jun 2017 A1
20170243230 Ross et al. Aug 2017 A1
20170243231 Withrow et al. Aug 2017 A1
20170243232 Ross et al. Aug 2017 A1
20170243233 Land et al. Aug 2017 A1
20170249491 Macintosh et al. Aug 2017 A1
20170251143 Peruch et al. Aug 2017 A1
20170253069 Kerkar et al. Sep 2017 A1
20170295301 Liu et al. Oct 2017 A1
20170300905 Withrow et al. Oct 2017 A1
20170344823 Withrow et al. Nov 2017 A1
20170344824 Martin Nov 2017 A1
20170372327 Withrow Dec 2017 A1
20180000359 Watanabe Jan 2018 A1
20180012008 Withrow et al. Jan 2018 A1
20180018627 Ross et al. Jan 2018 A1
20180018838 Fankhauser et al. Jan 2018 A1
20180024074 Ranieri et al. Jan 2018 A1
20180024178 House et al. Jan 2018 A1
20180047128 Ross et al. Feb 2018 A1
20180053312 Ross et al. Feb 2018 A1
20180121643 Talwerdi et al. May 2018 A1
20180144211 Ross et al. May 2018 A1
20180315058 Withrow et al. Nov 2018 A1
20180349694 Ross et al. Dec 2018 A1
20190026581 Leizerson Jan 2019 A1
20190034518 Liu et al. Jan 2019 A1
20190034694 Ross Jan 2019 A1
20190102873 Wang et al. Apr 2019 A1
20190102973 Oyama et al. Apr 2019 A1
20190130082 Alameh et al. May 2019 A1
20190228174 Withrow et al. Jul 2019 A1
20190266373 Hirokawa Aug 2019 A1
20190279017 Graham et al. Sep 2019 A1
20190287118 Ross et al. Sep 2019 A1
20190342102 Hao et al. Nov 2019 A1
20190362186 Irshad et al. Nov 2019 A1
20200153822 Land et al. May 2020 A1
20200226366 Withrow et al. Jul 2020 A1
20200233901 Crowley et al. Jul 2020 A1
20200250395 Ross et al. Aug 2020 A1
20200257791 Shannon et al. Aug 2020 A1
20200334689 Withrow Oct 2020 A1
20200349379 Ross Nov 2020 A1
20200356772 Withrow et al. Nov 2020 A1
Foreign Referenced Citations (44)
Number Date Country
102006005927 Aug 2007 DE
0439669 Aug 1991 EP
0759596 Feb 1997 EP
1016548 Jul 2000 EP
1016549 Jul 2000 EP
1719070 Apr 2009 EP
2107506 Oct 2009 EP
2166493 Mar 2010 EP
2195621 Nov 2013 EP
2866193 Apr 2015 EP
2257909 May 2015 EP
2869240 May 2015 EP
2869241 May 2015 EP
3208744 Aug 2017 EP
3249581 Nov 2017 EP
3267384 Jan 2018 EP
3270342 Jan 2018 EP
3435287 Jan 2019 EP
3514715 Jul 2019 EP
2097979 Nov 1982 GB
2482127 Jan 2012 GB
61234481 Oct 1986 JP
H07192112 Jul 1995 JP
2007213148 Aug 2007 JP
2010146158 Jul 2010 JP
20120009654 Feb 2012 KR
2005086616 Sep 2005 WO
2006038114 Apr 2006 WO
2007028799 Mar 2007 WO
2007031176 Mar 2007 WO
2007071788 Jun 2007 WO
2007090437 Aug 2007 WO
2007144598 Dec 2007 WO
2009030853 Mar 2009 WO
2009089126 Jul 2009 WO
2009115611 Sep 2009 WO
2010018464 Feb 2010 WO
2010018646 Feb 2010 WO
2012145842 Nov 2012 WO
2013051019 Apr 2013 WO
2013126221 Aug 2013 WO
2013173408 Nov 2013 WO
2015004434 Jan 2015 WO
2016081831 May 2016 WO
Non-Patent Literature Citations (50)
Entry
Stern et al., EMFORCED: EM-Based Fingerprinting Framework for Remarked and Cloned Counterfeit IC Detection Using Machine Learning Classification, 1063-8210 2019 IEEE, pp. 363-375. (Year: 2020).
Anonymous, “Intrinsic Characteristics for Authentication” & “Alp Vision Advances Security Through Digital Technology,” Authentication News vol. 12, (No. 9) pp. 2, 7 and 8, dated Sep. 2006, 3 pages total.
Beekhof et al., “Secure Surface Identification Codes,” Proceeding of the SPIE 6819: Security Forensics, Steganography, and Watermarking of Multimedia Contents X:68190D, 2008. (12 pages).
Boa et al., “Local Feature based Multiple Object Instance Identification using Scale and Rotation Invariant Implicit Shape Model,” 12th Asian Conference on Computer Vision, Singapore, Singapore, Nov. 1-5, 2014, pp. 600-614.
Buchanan et al., “Fingerprinting documents and packaging,” Nature 436 (7050): 475, 2005.
Cavoukian et al.; “Biometric Encryption: Technology for Strong Authentication, Security and Privacy,” Office of the Info. and Privacy Commissioner, Toronto, Ontario, CA, 2008, in IFIP International Federation for Information Processing, Vol. 261, pp. 57-77.
Di Paola et al., “An Autonomous Mobile Robotic System for Surveillance of Indoor Environments,” International Journal of Advanced Robotic Systems 7(1): 19-26, 2010.
Drew, M. S., et al., “Sharpening from Shadows: Sensor Transforms for Removing Shadows using a Single Image,” Color and Imaging Conference, vol. 5, Society for Imaging Science and Technology, 2009, pp. 267-271.
Ebay, “eBay Launches Must-Have iPhone App RedLaser 3.0” https:/www.ebayinc.com/stories/news/ebay-launches-musthave-iphon-app-redlaser30/, Nov. 18, 2011 (Year: 2011), 8 pages.
Entrupy.com Website History, Wayback Machine; https://web.archive.org/web/20 I 60330060808/https://www.entrupy.com/; Mar. 30, 2016 (Year: 2016), 2 pages.
Farid, “Digital Image Forensics”, Dartmouth CS 89/189, Spring 2013; 199 pages.
Fischler et al., “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Communication of the ACM 24(6); 381-395, 1981.
Huang et al., “A Novel Binarization Algorithm for Ballistic Imaging Systems,” 3rd International Congress on Image and Signal Processing, Yantai, China, Oct. 16-18, 2010, pp. 1287-1291.
Huang, et al., “An Online Ballistics Imaging System for Firearm Identification”; 2nd International Conference on Signal Processing Systems, Dalian, China, Jul. 5-7, 2010, vol. 2, pp. 68-72.
Kartik et al., “Security System with Face Recognition, SMS Alert and Embedded Network Video Monitoring Terminal,” International Journal of Security, Privacy and Trust Management 2(5):9-19, 2013.
Li, “Image Processing for the Positive Identification of Forensic Ballistics Specimens,” Proceedings of the 6th International Conference of Information Fusion, Cairns, Australia, Jul. 8-11, 2003, pp. 1494-1498.
Li, “Firearm Identification System Based on Ballistics Image Processing,” Congress on Image and Signal Processing, School of Computer and Information Science, Faculty of Computing, Health and Science Edith Cowan University, Perth, Australia pp. 149-154.
Maddern et al., “Illumination Invariant Imaging: Applications in Robust Vision-based Localization, Mapping and Classification for Autonomous Vehicles,” IEEE International Conference on Robotics and Automation, Hong Kong, May 31-Jun. 7, 2014, 8 pages.
Matsumoto et al., “Nano-artifact metrics based on random collapse of resist,” Scientific Reports 4:6142, 2014 (5 pages).
Mistry et al., “Comparison of Feature Detection and Matching Approaches: SIFT and SURF,” Global Research and Development Journal for Engineering, vol. 2, Issue 4, Mar. 2017, 8 pages.
NCOA Link at http:/ /ribbs.usps.gov/ncoalink/ncoalink_print.htm; dated May 27, 2009; 3 pages.
Online NCOALink® Processing Acknowledgement Form (PAF) Released by Lorton Data, Jun. 2, 2009, URL=http://us.generation-nt.com/online-ncoalink-processing-acknowledgement-form-paf-released-by-press-1567191.html, download date Jun. 25, 2010, 1 page.
Rublee et al., “ORB: an efficient alternative to SIFT or SURF,” IEEE International Conference on Computer Vision, Barcelona, Spain, Nov. 6-13, 2011, 8 pages.
Schneider et al., “A Robust Content Based Digital Signature for Image Authentication,” Proceeding of the International Conference on Image Processing Lausanne, Switzerland, Sep. 19, 1996, pp. 227-230.
Schwabe Williamson & Wyatt, PC—Listing of Related Cases; dated Sep. 16, 2017; 2 pages.
Sharma et al., “The Fake vs Real Goods Problem: Microscopy and Machine Learning to the Rescue,” KDD 2017 Applied Data Science Paper, Aug. 13-17, 2017, Halifax, NS, Canada, 9 pages.
Shi et al., “Smart Cameras: Fundamentals and Classification,” Chapter 2, Belbachir (ed.), Smart Cameras, Springer, New York, New York, USA 2010, pp. 19-34.
Shields, “How To Shop Savvy With Red Laser,” published online on Mar. 22, 2010; https ://i phone .appstomn .net/reviews/lifesty le/how-to-shop-savvy-with-redlaser /, downloaded Mar. 22, 2010, 8 pages).
Smith, “Fireball: A Forensic Ballistic Imaging System: Proceedings of the 31st Annual International Carnahan Conference on Security Technology,” Canberra, Australia, Oct. 15-17, 1997, pp. 64-70.
Takahashi et al., “Mass-produced Parts Traceability System Based on Automated Scanning of Fingerprint of Things,” 15th IAPR International Conference on Machine Vision Applications, Nagoya, Japan, May 8-12, 2017, 5 pages.
United States Postal Service Publication 28 “Postal Addressing Standards”, dated Jul. 2008; text plus Appendix A only; 55 pages.
United States Postal Service, “NCOALink Systems”, http:/ /www.usps.com/ncsc/addressservices/moveupdate/changeaddress.htm, website accessed Jun. 23, 2010, 2 pages.
Veena et al., “Automatic Theft Security System (Smart Surveillance Camera),” Computer Science & Information Technology 3:75-87, 2013.
Woods, “Counterfeit-spotting truth machine launches out of Dumbo,” published online on Feb. 11, 2016, downloaded from http://technically/brooklyn/2016/02/11/entrupy-counterfeit-scanner/ on Mar. 20, 2019, 3 pages.
Farid, Ahmed , et al., “Integrated fingerprint verification method using a composite signature-based watermarking technique”, Optical Engineering, The Catholic University of America, (Year: 2007), 6 pages.
Non-Final Office Action Issued in U.S. Appl. No. 16/866,468, dated Sep. 9, 2021, 24 pages.
European Search Report, dated Feb. 25, 2021, for European Application No. 20202130.9, 9 pages.
Extended European Search Report Application No. 21153877.2, dated Jun. 15, 2021, 8 pages.
Extended European Search Report, dated Aug. 18, 2021, for European Application No. 21164207.9—17 pages.
Extended European Search Report, dated Aug. 18, 2021, for European Application No. 21164207.9, 13 pages.
Extended European Search Report, dated Aug. 19, 2021, for European Application No. 21164353.1, 9 pages.
Extended European Search Report, dated Jun. 18, 2021, for European Application No. 21153355.9, 8 pages.
Non-Final Office Action Issued in U.S. Appl. No. 16/553,943, dated Sep. 1, 2021, 13 pages.
Non-Final Office Action Issued in U.S. Appl. No. 16/827,701, dated Aug. 17, 2021, 19 pages.
Non-Final Office Action Issued in U.S. Appl. No. 16/917,355, dated May 18, 2021, 26 pages.
Hensler, J., et al., “Hybrid Face Recognition Based on Real-time Multi-camera Stereo-Matching”, ICIAP: International Conference on Image Analysis and Processing, 17th International Conference, Naples, Italy, Sep. 9-13, 2013, 10 pages.
Jain, Anil K, et al., “Biometric Cryptosystems: Issues and Challenges”, Proceedings of the IEEE, IEEE, New York, US, vol. 92, No. 6, Jun. 1, 2004, XP011112757, pp. 948-960.
Scott, Von Duhn, et al., “Three-View Surveillance Video Based Face Modeling For Recognition”, Biometrics Symposium, 2007, IEEE, PI, Sep. 30, 2007, 6 pages XP031202430.
Truong, Hieu C, et al., “Royal Canadian Mint/Signoptic Technologies Coin DNA Technology”, World Money Fair (WMF) Berlin Feb. 1-3, 2011, http://www.amisdeleuro.org/upload/1340734488.pptx, 22 pages.
Zaeri, Naser, “Minutiae-based Fingerprint Extraction and Recognition, 2020 (year 2010)”, 47 pages.
Related Publications (1)
Number Date Country
20210103760 A1 Apr 2021 US
Provisional Applications (3)
Number Date Country
61448465 Mar 2011 US
61914722 Dec 2013 US
61898780 Nov 2013 US
Divisions (1)
Number Date Country
Parent 14531724 Nov 2014 US
Child 15208339 US
Continuations (3)
Number Date Country
Parent 15862556 Jan 2018 US
Child 17102115 US
Parent 15208339 Jul 2016 US
Child 15862556 US
Parent 13410753 Mar 2012 US
Child 14290653 US
Continuation in Parts (1)
Number Date Country
Parent 14290653 May 2014 US
Child 14531724 US