Securing composite objects using digital fingerprints

Information

  • Patent Grant
  • Patent Number
    11,238,146
  • Date Filed
    Thursday, October 17, 2019
  • Date Issued
    Tuesday, February 1, 2022
Abstract
A system comprises a combination of digital fingerprint authentication techniques, processes, programs, and hardware to facilitate highly reliable authentication of a wide variety of composite physical objects. “Composite” in this case means that there are distinct regions of the object that must be authenticated individually and in tandem to authenticate the entire object. Preferably, a template is stored that defines, for a class of objects, what regions must be found, their locations, optionally the semantic content of the regions, and other criteria. Digital fingerprinting is utilized to locate and attempt to match candidate regions by querying a database of reference object records.
Description
RELATED CASE

None; this is an original application.


COPYRIGHT NOTICE

Copyright © 2019 Alitheon, Inc. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. 37 C.F.R. § 1.71(d) (2017).


TECHNICAL FIELD

This application pertains to methods, systems and software for authenticating composite physical objects using digital fingerprinting and related technologies.


BACKGROUND

Digital fingerprinting has been used to identify and/or authenticate a physical object. However, for many composite objects, simple matching of one or more random locations on an object does not provide reliable authentication. More sophisticated techniques are needed to reliably authenticate complex physical objects.


SUMMARY OF THE PRESENT DISCLOSURE

The following is a summary of the present disclosure to provide a basic understanding of some features and context. This summary is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. Its sole purpose is to present some concepts of the present disclosure in simplified form as a prelude to a more detailed description that is presented later.


A system taught by this disclosure generally comprises a combination of digital fingerprint authentication techniques, processes, programs, and hardware. In an embodiment, a mechanism is provided to “tell” the system which regions of a physical object are important to authentication and what it should find there (i.e., the content of each region). In an embodiment, the system may also specify limits on positional variance that, if exceeded, may indicate an altered item.


In an embodiment, a computer-implemented method to authenticate a composite physical object comprises the steps of: selecting a class of objects to which the composite physical object belongs; accessing a stored template provided for authenticating objects of the selected class; identifying all regions of the object specified in the template as required for authentication; scanning at least the identified regions of the physical object to acquire image data for each identified region; processing the acquired image data to extract digital fingerprints of each of the identified regions; based on the digital fingerprints, querying a database of reference objects of the selected class to obtain a matching record; wherein a matching record requires that each and every identified region of the physical object match a corresponding region of the matching record, based on the corresponding digital fingerprints, within a selected tolerance; and determining authenticity of the physical object based on results of the querying step.
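

For illustration only, the gating logic of this method might be sketched in Python as follows. The RegionSpec and RegionMatch records and the query callable are hypothetical stand-ins for the template entries and the reference-database lookup, not the actual implementation:

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Optional

    @dataclass
    class RegionSpec:          # one region required by the class template
        name: str
        tolerance: float       # match tolerance selected for this region

    @dataclass
    class RegionMatch:         # result of querying one region's fingerprint
        record_id: str         # reference record whose region matched
        score: float

    def authenticate(required: List[RegionSpec],
                     query: Callable[[str, float], Optional[RegionMatch]]) -> bool:
        """Authentic only if each and every required region matches a
        corresponding region of one and the same reference record."""
        matches: Dict[str, RegionMatch] = {}
        for region in required:
            m = query(region.name, region.tolerance)  # database query per region
            if m is None:
                return False                          # a required region failed
            matches[region.name] = m
        # All region matches must point at a single reference record.
        return len({m.record_id for m in matches.values()}) == 1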





BRIEF DESCRIPTION OF THE DRAWINGS

To enable the reader to realize one or more of the above-recited and other advantages and features of the present disclosure, a more particular description follows by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered limiting of its scope, the present disclosure will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a simplified conceptual diagram of one example of a computer-implemented authentication system consistent with the present disclosure.



FIG. 2 is a simplified example of an authentication template data structure.



FIG. 3 is a simplified flow diagram of an example of an authentication process for a composite physical object.



FIG. 4 is a simplified flow diagram of a point of interest matching process useful in matching digital fingerprints of a physical object.



FIG. 5 is a simplified block diagram of another example of a computer-implemented authentication system consistent with the present disclosure.





DETAILED DESCRIPTION OF ONE OR MORE EMBODIMENTS

Reference will now be made in detail to embodiments of the inventive concept, examples of which are illustrated in the accompanying drawings. The accompanying drawings are not necessarily drawn to scale. In the following detailed description, numerous specific details are set forth to enable a thorough understanding of the inventive concept. It should be understood, however, that persons having ordinary skill in the art may practice the inventive concept without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.


It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first machine could be termed a second machine, and, similarly, a second machine could be termed a first machine, without departing from the scope of the inventive concept.


It will be further understood that when an element or layer is referred to as being “on,” “coupled to,” or “connected to” another element or layer, it can be directly on, directly coupled to or directly connected to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly coupled to,” or “directly connected to” another element or layer, there are no intervening elements or layers present. Like numbers refer to like elements throughout. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.


The terminology used in the description of the inventive concept herein is for the purposes of describing illustrative embodiments only and is not intended to be limiting of the inventive concept. As used in the description of the inventive concept and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed objects. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


“Composite” means that there are regions of the object such that authenticating them is necessary or at least contributory to authenticating the entire object. The present disclosure applies to all physical objects that are assemblages or composites and where at least some of the individual components must be authenticated for the object itself to authenticate. Put another way, this disclosure enables reliable authentication of virtually any object where a single region of authentication is inadequate.


“Region” means a portion of the physical object. It may be a component (such as a chip on a printed circuit board), it may be a specific region on a document (e.g. the name field on a passport), or it may be just a portion of the object with no particular content (a piece of the blank paper on a Federal Reserve note). Where “region” is used, it is to be understood in all or any of these contexts.


Various forms of the words “authenticate” and “authentication” are used broadly to describe both authentication and attempts to authenticate which comprise creating a digital fingerprint of the object. Therefore, “authentication” is not limited to specifically describing successful matching of inducted objects or generally describing the outcome of attempted authentications. As one example, a counterfeit object may be described as “authenticated” even if the “authentication” fails to return a matching result. In another example, in cases where unknown objects are “authenticated” without resulting in a match and the authentication attempt is entered into a database for subsequent reference, the action described as “authentication” or “attempted authentication” may also, post facto, be properly described as an “induction”. An authentication of an object may refer to the induction or authentication of an entire object or of a portion of an object.


Digital fingerprinting and scanning are described later.


Object Authentication


This disclosure teaches determining the authenticity (or lack thereof) of physical objects where multiple regions of the object must be authentic for the object (as a whole) to be considered authentic. “Regions” may be physical components like a chip on a printed circuit board or may be regions on a document (such as the photograph in a passport). They can even be somewhat more abstract (such as the semantic content of a will being a “part” of the will). Authentication of the whole thus comprises ensuring that a sufficient set of designated regions of the object is authentic.


In an embodiment, authentication may comprise the steps of:

    • Determining the regions designated as necessary for authentication.
    • Locating the necessary regions, extracting digital fingerprints of those regions, extracting content, positional, and/or other information from the regions.
    • Determining whether the necessary relationships (physical, logical, or other) among the regions of the object are found.


In one preferred embodiment, some or all of these authentication requirements may be stored in a “template” which may be implemented as one or more records in a database. A computer or digital processor is used to automate the process. Indeed, manual authentication is impossible due to the complexity and volume of data to be processed. For example, there may be 20,000 unique “points of interest” included in a digital fingerprint of an object or even a single region of an object. All of the points of interest may be considered in a matching process. An example of a matching process is described below with regard to FIG. 4.



FIG. 1 is a simplified block diagram of one example of a computer-implemented authentication system consistent with the present disclosure. In the figure, a composite physical object 100 has regions including regions 102. The object may be scanned by a scanner 104 to capture image data, and the image data processed, block 106, to form digital fingerprints of each region. (Scanning and digital fingerprinting are described in more detail later.) This information may be input to an authentication server 110 via a network or other communication link 112. In some embodiments, the authentication server 110 may incorporate hardware and software to implement a user interface 140, a query manager 142, a communications module 150 and various other workflows 152. In an embodiment, the server may host data analysis software 144.


Referring again to FIG. 1, a datastore 116 may be coupled to the authentication server for communications with the query manager 142 for finding, reading and writing data to the datastore 116. A computer, terminal or similar apparatus 130 may enable a user to communicate with the authentication server (for example, via user interface 140) to manage or request authentication operations, store and modify data such as templates, and receive authentication result messages. The datastore 116 preferably stores, inter alia, reference object records 170 and authentication templates 160. Reference objects are physical objects which have been previously “inducted” into the datastore, meaning that corresponding records are stored therein, the reference record for an object comprising various data including digital fingerprints for selected regions of the reference object. Other information associated with a region is described below.


Of particular interest in the present description are objects where substitution of a counterfeit region or component for a genuine one should make the object fail authentication. “Region” here may be, for example, “what is written in a particular location,” “a bit of the background,” or “a physical component,” or any number of other things.


The present disclosure teaches, therefore, a process that generally includes selecting multiple regions or components of an object, defining what would make each of them authentic, defining which of them must be authentic for the object to be considered authentic, defining the physical and/or content-based relationships of the different regions, digitally fingerprinting the relevant regions, determining (if required) their contents, determining (if required) their positions (physical or logical), comparing all this with the references, and determining whether the object is authentic.


The information about a region may contain its digital fingerprint but may also contain information about the region's content or what is written on it, its physical or logical relationship with the object as a whole or with other regions (e.g., this region must be exactly 1.3″ left of and 0.8″ below this other region, or this region must be a component of this other region), as well as any relevant metadata. The term “region” is used broadly to mean any division of the object, not just a physically-separable component.
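

As an illustration only, a positional-variance check of the kind just described (this region must be exactly 1.3″ left of and 0.8″ below another region, within some tolerance) might look like the following Python sketch; the coordinate convention and the tolerance value are assumptions made for exposition:

    def relative_position_ok(region_xy, anchor_xy,
                             expected_dx=-1.3, expected_dy=0.8, tol=0.05):
        """Check that region_xy sits 1.3 inches left of and 0.8 inches
        below anchor_xy, within tol inches on each axis.
        Coordinates are (x, y) in inches, with y increasing downward."""
        dx = region_xy[0] - anchor_xy[0]
        dy = region_xy[1] - anchor_xy[1]
        return abs(dx - expected_dx) <= tol and abs(dy - expected_dy) <= tol

    # Example: a field 1.3" left of and 0.8" below its anchor passes.
    print(relative_position_ok((2.0, 3.0), (3.3, 2.2)))  # True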


Complex Object Authentication


Relatively complex objects, however, have additional steps involved in authentication. A complex object may have many components, fields, areas, or other divisions that have to agree individually with what was captured at induction. Clearly, to determine whether what is found on the current (or test) object matches its reference in the database, we first must know its purported identity. It is sufficient in many cases to know simply a class to which the test object belongs. This class identity can be used to access a corresponding template to define authentication requirements, and the class helps to limit the scope of querying the reference database. We also have to know what regions must match the reference database for the object to be considered authentic. In an embodiment, these can be identified in the corresponding class template. Authentication then consists of ensuring that the individual regions satisfactorily match the reference for the object. This discussion explains the use of digital fingerprints, positional, and content information to perform that matching.


In general, for objects authenticatable through the teachings of this disclosure, authenticating a random region is not enough to guarantee the object is authentic. This applies, for example, to objects that may be illicitly modified while most of the object is left alone. Authentication requires more, under these circumstances, than merely saying there are lots of good digital fingerprint matches. The “right” regions/components/pieces also have to authenticate. The number of such regions may be anything from one on up, depending on the object.


In some use cases, there may be regions on the object that are not important for authentication purposes. Thus, a label applied somewhere in the manufacturing process that has nothing to do with certifying the object as authentic may be ignored. (On the other hand, a label may be “read” as a convenient way to automatically determine the class of an object.) “Template” is used broadly to mean any method of selecting (manually or automatically) a region to be authenticated, one not to be, or both. In other words, templates can be positive (this region must authenticate) or negative (don't bother to look here for authentication purposes).


The substrate (for a document, what the text or images are printed on), the printing material, the location, and the contents of what is printed may all need to be established as authentic for the item to be authentic. Varying combinations of these are all in view in this patent.


Most of the documents and financial instruments relevant here either have some kind of direct value (e.g. a Federal Reserve note) or may grant access to something of value (a passport for entering the country, a will) if they are authentic. The same principles also apply to things like printed circuit boards. In some cases, it may be essential that the areas being authenticated either be so widely spread that counterfeiting is infeasible or, more securely, that the authenticated regions be kept confidential. For a Federal Reserve note to be authentic, for example, more than half the note must be authenticated.


Authentication security in general is a combination of (1) Authenticating what could be falsified (both regions and contents, in general); and (2) Making sure all the pieces fit together (i.e. all of them are authentic regions of an inducted original).


Authenticating multiple regions may require more than just matching digital fingerprints. It may also be required that the contents of those regions match the original, that their physical arrangement or positioning on the object or among themselves be consistent with the original, or many other things.


In some use cases, concealing what regions are being used for authentication is advisable to better deter fraud and counterfeiting. This approach could be used, for example, to authenticate a Federal Reserve note by scattering the authentication regions pseudo-randomly across the bill, so that it would be essentially impossible to create, say, a note that was a composite of an authentic note and a counterfeit and not have that detected (fail authentication) by the teachings of this patent.



FIG. 2 is a simple example of a template stored in a digital record or file 160. This illustration uses an XML-like syntax for convenience. It is not implied that XML is necessary or even advantageous for this purpose—it is simply a syntax readily readable by most readers. This example illustrates a template for a single class of objects, “CLASS NAME,” specifying “required regions” (those that must be matched), as well as “ignore regions” that need not be matched. For some regions, necessary location and content information is specified.
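

FIG. 2 itself is not reproduced here, but a hypothetical template in the same XML-like spirit could be represented and read as in the following Python sketch; the class name, region names, and attributes are invented for illustration:

    import xml.etree.ElementTree as ET

    TEMPLATE = """
    <template class="PASSPORT">
      <required name="photo"   x="0.5" y="1.0" w="1.4" h="1.8"/>
      <required name="surname" x="2.2" y="1.0" w="2.0" h="0.3" content="text"/>
      <ignore   name="batch_label"/>
    </template>
    """

    root = ET.fromstring(TEMPLATE)
    required = [r.attrib for r in root.findall("required")]      # must match
    ignored = [r.attrib["name"] for r in root.findall("ignore")]  # skip these
    print(root.attrib["class"], required, ignored)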



FIG. 3 is a simplified flow diagram of an example of an authentication process for a composite physical object. In an embodiment, the process may comprise: accessing a stored template provided for authenticating objects of the selected class, step 302; identifying all regions of the object specified in the template as required for authentication, step 304; scanning at least the identified regions of the physical object to acquire image data for each identified region, step 306; processing the acquired image data to extract digital fingerprints of each of the identified regions, step 308; based on the digital fingerprints, querying a database of reference objects of the selected class to obtain a matching record, step 312; applying the match criteria specified in the template, step 316; determining authenticity of the physical object based on results of the querying step, step 320; and transmitting an indication of the result to a user interface, step 322.


EXAMPLE EMBODIMENTS

This section describes several embodiments of the invention. They are descriptive only and not meant to limit the teachings of this patent but merely to show ways in which it could be used.


Currency. In one embodiment a $100 bill that has been cut in half, with each half attached to a counterfeit (half) $100 bill, is not authentic. Current techniques, including cash handling machines and counterfeit-detecting pens, are fooled by such constructs—both $100 bills will show as authentic. Were we to apply single-region digital fingerprinting to those bills, they also would authenticate (since half the bill is more than enough for a good authentication). In this case, the bill should not be authenticated, and the teachings of this patent apply. Even though very large regions of the bill would show as genuine, the desired result is that the overall bill should fail. This patent ensures it does by requiring authentication of multiple regions of the bill.


The size of these regions depends on many things, including how small the pieces of a counterfeit bill may be. The number and location of those regions should be sufficiently widely distributed that it is infeasible to do a “cut and paste” and get a counterfeit worth producing. These regions can be chosen manually (perhaps once for all bills) or programmatically. Where they are located and their sizes can be chosen for uniform distribution, randomly, through the use of a Latin Hypercube approach, or by many other means understood in the art. If they are different for each bill, the template for finding them at authentication can be created or stored in many ways, including indexing it in a database linked to the bill's serial number. That serial number can be acquired in different ways. It can, for example, be read off the bill using optical character recognition or entered manually. Alternatively, the entire bill can be submitted for authentication; whichever reference bill it best matches becomes the purported identity of the bill, and that bill's serial number is used. For greater security, the regions used for authentication may be kept confidential. The randomly-chosen approach mentioned above makes this straightforward.
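

One way to realize the per-bill, serial-number-indexed selection of confidential regions described above is to seed a pseudo-random generator from the serial number, so the regions never need to be stored explicitly. This is a sketch under that assumption; the region count and sizes are placeholders, and a production system would use a keyed cryptographic construction rather than Python's random module:

    import random

    def regions_for_bill(serial, n_regions=12,
                         bill_w=6.14, bill_h=2.61, side=0.4):
        """Derive n pseudo-random authentication regions, as (x, y, w, h)
        in inches, from a bill's serial number. The same serial always
        yields the same regions; only the seeding rule need be kept secret."""
        rng = random.Random(serial)  # deterministic per serial number
        return [(rng.uniform(0, bill_w - side),
                 rng.uniform(0, bill_h - side), side, side)
                for _ in range(n_regions)]

    print(regions_for_bill("LB12345678C")[:2])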


To authenticate the bill, a large percentage of the area of the bill must fall in templated regions that must individually authenticate. Anything in excess of 50% of the area of the bill showing a digital fingerprint match would ensure that two bills cannot be split as discussed above and still yield two authenticatable $100 bills. If the regions are chosen randomly, uniformly, or in a Latin Hypercube arrangement, far less than half the bill need be authenticated to discover that half has been replaced.


In this example, the content of the regions is not important. Their positioning is based on the template, and the regions to be matched clearly must align with the template or they will not be seen; but for this example there is no need to determine whether such a region has been offset from its correct position by more than an allowed amount.


Passports. A passport is not authentic unless the correct information is printed on or affixed to a government-created passport blank. This information comprises a photograph of the person, two name fields, a passport number, the country, the person's birth date, the place of issue, issue date, expiration date, and others. For the passport to be authentic, the blank (background) must be authentic, virtually all the information fields must be shown to have the correct content, and the photograph must be the original.


As discussed previously, there are two kinds of templates in view here. The first is used on the background (the regions of the passport that are printed prior to putting any person-specific information on the passport). To be authentic, a passport must be printed on a government-produced blank. Significant background areas remain after the passport is completed and can be authenticated using the kind of pseudo-random template discussed under currency above.


The other kind of template in use in this embodiment is more standard. It covers regions that must have authentic content in order for the passport to be authentic. Authenticating “content” in each case in the passport means confirming “what the text says” for the textual regions. The photograph can be authenticated by matching its digital fingerprint with the photograph's analog in the reference set, by doing a direct image match, or by other methods known in the art. In addition, it is probably desirable that the textual regions' digital fingerprints match, as well as their textual contents.
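

As one possible reduction of the textual-content check to code (separate from the digital-fingerprint match), comparison of an OCR-read field against the reference content might be a normalized string comparison; this fragment is illustrative only:

    def content_matches(extracted, reference):
        """Compare OCR-extracted field text to the reference content,
        ignoring case and runs of whitespace. A real system would also
        fold common OCR confusions (O/0, I/1) before comparing."""
        norm = lambda s: " ".join(s.split()).upper()
        return norm(extracted) == norm(reference)

    print(content_matches(" doe,  JANE ", "DOE, JANE"))  # True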


Further confidence in authenticity is obtainable—and in view in this patent—if the templated fields are in their correct locations (up to a small error) both on the background and with respect to each other.


Wills. In one embodiment a will needs to be authenticated. To be authentic a will must be an unaltered original signed by the person. To determine that it is the one originally signed by the person, the signature and the paper on which the will is printed must be authenticated. The signature can be authenticated using digital fingerprinting (with a known region location) or by other means known in the art. The paper can be authenticated as with the currency and passport examples. With a multi-page will, each page must be authentic and all pages in the original must be present with no additions.


Wills differ somewhat from previous examples in that all the content must be unchanged. A will signed by the correct person, and on the original paper, but where the contents have been altered, is not an authentic will. Ensuring the content is authentic can be as straightforward as reading the content at authentication and comparing it with the original stored in the reference file. It can also be done by requiring that all regions of the document have digital fingerprints that match the original. These or any other ways of verifying content are in view in this patent.


Printed circuit boards. In one embodiment a printed circuit board needs to be authenticated. Note that the description here applies to the same items (printed circuit boards) as one embodiment of 0682 and with the same intent—finding replaced components. There are several differences here (as mentioned above), however. One of the most important is that under this patent the system is told which regions are most important while in that one we look for regions on the circuit board that are deficient in matches. The two approaches can also be combined to authenticate a printed circuit board.


The templating required here is just like the templating in the passport for the fields of known location (because the components are supposed to be in a known location). The contents of those regions are their digital fingerprints.


“Identification” and “Authentication”


The current patent deals primarily with authentication (though identifying the object, or using some method such as a serial number to identify what object this purports to be, is certainly a part of this patent, since you cannot authenticate an object if you do not know what it is supposed to be).


“Identification” means determining what particular object is before us. We must be careful here to distinguish purported identification and real identification. Purported identification is determining which particular object the object before us claims to be. The serial number on a Federal Reserve note or on a passport tells us which particular note or passport the one before us claims to be. But it does not actually identify the object. Digital fingerprinting of the object before us, and successfully matching that digital fingerprint with one in a reference database collected when provenance was assured, is necessary for identification.


In a case where there is no possibility or consequence of the object having been modified, matching any reasonable portion of the digital fingerprint of the current object with that of an object in the reference database is sufficient for identification (whether or not there is a serial number). All the serial number does is tell us which one to check against. Without a serial number, we have to check every item of the same kind as the object before us. So “identification” means determining which item is before us, but it may not determine whether the object is completely authentic (only that the region where the matched digital fingerprint came from is authentic). For something like a unitary machined part, for example, identification as described here is sufficient for authentication.


Matching Points of Interest



FIG. 4 shows an example of a process for matching points of interest (“POIs”) of a region. The POIs are acquired in a test object fingerprint, block 402. Next the process calls for searching or querying a database to identify matching reference POIs within a given tolerance, block 406. The process then finds a best-fit geometric transformation from the test object POIs to the identified matching POIs, block 408. Preferably, the transformation includes ΔX, ΔY, rotation and scale. The goal here is to compensate for variations in equipment, setup, lighting, etc. between the original induction of the reference object and the current scanned image data of the unknown or test object. Then the best-fit transformation is applied to the test object digital fingerprints and a match correlation value determined for each of them, block 410. The digital fingerprints that exceed a threshold match correlation value may be called “true matches.” At block 412, the process analyzes the true matches to identify a matching reference record for the test object digital fingerprint.
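

A sketch of this flow follows, using OpenCV's partial-affine estimator as one plausible stand-in for the best-fit transformation (ΔX, ΔY, rotation, and scale); the residual and count thresholds are illustrative assumptions:

    import numpy as np
    import cv2

    def matching_record_supported(test_pts, ref_pts,
                                  max_residual=3.0, min_true_matches=25):
        """test_pts, ref_pts: N x 2 float32 arrays of tentatively matched
        point-of-interest locations (test object vs. reference record)."""
        if len(test_pts) < 2:
            return False
        # Best-fit translation + rotation + uniform scale (4 degrees of
        # freedom), compensating for differences in equipment and setup
        # between induction and the current scan.
        M, _ = cv2.estimateAffinePartial2D(test_pts, ref_pts)
        if M is None:
            return False
        # Apply the transform to the test points of interest...
        ones = np.ones((len(test_pts), 1), dtype=np.float32)
        mapped = np.hstack([test_pts, ones]) @ M.T
        # ...and count "true matches": points landing near their references.
        residuals = np.linalg.norm(mapped - ref_pts, axis=1)
        return int((residuals < max_residual).sum()) >= min_true_matches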



FIG. 5 is a simplified diagram of another example of a computer-implemented authentication system consistent with the present disclosure. Here, the authentication server 110 and associated datastore may be generally the same as previously described with regard to FIG. 1. In this case, a smart phone 502 having an internal camera is used to capture image data of an object 510. In an embodiment, the smart phone may have software for processing the image data to extract digital fingerprints. In some embodiments, the smart phone may transmit image data over a network 504 (ethernet, internet, for example) to another processor. It may transmit raw image data or processed digital fingerprints to the authentication server 110. In an embodiment, the smart phone has software to request authentication services, and receive authentication results, for example, via the user interface 140 of the authentication server 110 and the user interface of the smartphone. Thus, the authentication processes described above may be conducted at virtually any location. One use case may be where the physical object is difficult or impossible to move, or where it may not be moved due to practical, contractual, or regulatory restrictions.


Digital Fingerprinting


“Digital fingerprinting” refers to the creation and use of digital records (digital fingerprints) derived from properties of a physical object, which digital records are typically stored in a database. Digital fingerprints may be used to reliably and unambiguously identify or authenticate corresponding physical objects, track them through supply chains, record their provenance and changes over time, and for many other uses and applications including providing secure links between physical and digital objects as described above.


In more detail, digital fingerprints typically include information, preferably in the form of numbers or “feature vectors,” that describes features that appear at particular locations, called points of interest, of a two-dimensional (2-D) or three-dimensional (3-D) object. In the case of a 2-D object, the points of interest are preferably on a surface of the corresponding object; in the 3-D case, the points of interest may be on the surface or in the interior of the object. In some applications, an object “feature template” may be used to define locations or regions of interest for a class of objects. The digital fingerprints may be derived or generated from digital data of the object which may be, for example, image data.
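

In code form, the structure just described (feature vectors tied to points of interest, optionally scoped to a region from a feature template) might be modeled roughly as follows; the field names are illustrative, not an actual record layout:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PointOfInterest:
        x: float                      # 2-D location; add z for 3-D objects
        y: float
        z: Optional[float] = None
        feature_vector: List[float] = field(default_factory=list)

    @dataclass
    class DigitalFingerprint:
        object_class: str             # e.g. "PASSPORT"
        region: Optional[str] = None  # feature-template region, if any
        points: List[PointOfInterest] = field(default_factory=list)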


While the data from which digital fingerprints are derived is often images, a digital fingerprint may contain digital representations of any data derived from or associated with the object. For example, digital fingerprint data may be derived from an audio file. That audio file in turn may be associated or linked in a database to an object. Thus, in general, a digital fingerprint may be derived from a first object directly, or it may be derived from a different object (or file) linked to the first object, or a combination of the two (or more) sources. In the audio example, the audio file may be a recording of a person speaking a particular phrase. The digital fingerprint of the audio recording may be stored as part of a digital fingerprint of the person speaking. The digital fingerprint (of the person) may be used as part of a system and method to later identify or authenticate that person, based on their speaking the same phrase, in combination with other sources.


In the context of this description a digital fingerprint is a digital representation of the physical object. It can be captured from features of the surface, the internals, the progression of the object in time, and any other repeatable way that creates a digital fingerprint that can be uniquely and securely assigned to the particular physical object. Though not mentioned herein, secure protection of the physical object, its digital fingerprint, and of the associated digital objects is assumed.


In the context of this document, a digital fingerprint is a natural “digitization” of the object, obtainable unambiguously from the physical object. It is the key to the digital object, providing the link between the physical object and the digital. These digital fingerprints, in order to accomplish the kind of physical-digital linkage desired, must have certain properties. Our approach has these properties, while many other forms of digital fingerprinting do not. Among these properties are:

    • The digital fingerprint must be unambiguously derived from a single individual object.
    • It must remain matchable (to a corresponding data store record) with high confidence even as the individual object ages, wears, or is otherwise changed.


Returning to the 2-D and 3-D object examples mentioned above, feature extraction or feature detection may be used to characterize points of interest. In an embodiment, this may be done in various ways. Two examples include Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). Both are described in the literature. For example: “Feature detection and matching are used in image registration, object tracking, object retrieval etc. There are number of approaches used to detect and matching of features as SIFT (Scale Invariant Feature Transform), SURF (Speeded up Robust Feature), FAST, ORB etc. SIFT and SURF are most useful approaches to detect and matching of features because of it is invariant to scale, rotate, translation, illumination, and blur.” Mistry, Darshana et al., “Comparison of Feature Detection and Matching Approaches: SIFT and SURF,” GRD Journals—Global Research and Development Journal for Engineering, Vol. 2, Issue 4, March 2017.
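

For instance, SIFT points of interest and their descriptors can be obtained with OpenCV (cv2.SIFT_create is available in recent builds); this is one possible realization, not necessarily the extraction actually used:

    import cv2

    img = cv2.imread("region.png", cv2.IMREAD_GRAYSCALE)  # a scanned region
    sift = cv2.SIFT_create()
    # Each keypoint carries (x, y) location, scale, and orientation; the
    # descriptors are 128-dimensional feature vectors, one per point.
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(len(keypoints), descriptors.shape)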


In some embodiments, digital fingerprint features may be matched, for example, based on finding a minimum threshold distance. Distances can be found using Euclidean distance, Manhattan distance, etc. If the distance between two points is less than a prescribed minimum threshold distance, those key points may be known as a matching pair. Matching a digital fingerprint may comprise assessing a number of matching pairs, their locations or distance, and other characteristics. Many points may be assessed to calculate a likelihood of a match, since, generally, a perfect match will not be found. In some applications a “feature template” may be used to define locations or regions of interest for a class of objects.
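

Continuing the OpenCV sketch above, matching pairs can be selected by such a minimum-distance threshold on descriptor distances; the threshold value here is an arbitrary placeholder:

    import cv2

    def matching_pairs(desc_test, desc_ref, threshold=250.0):
        """desc_test, desc_ref: SIFT descriptor arrays from two scans.
        Returns the pairs whose Euclidean descriptor distance falls below
        the prescribed minimum threshold distance."""
        matcher = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance
        matches = matcher.match(desc_test, desc_ref)  # best pair per descriptor
        return [m for m in matches if m.distance < threshold]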


In an embodiment, features may be used to represent information derived from a digital image in a machine-readable and useful way. Features may be points, lines, edges, or blobs of an image, etc. Areas such as image registration, object tracking, and object retrieval require a system or processor to detect and match correct features. Therefore, it may be desirable to find features in ways that are invariant to rotation, scale, translation, illumination, noise, and blur. The search for interest points from one object image to corresponding images can be very challenging work. The search should preferably be done such that the same physical interest points can be found in different views. Once located, points of interest and their respective characteristics may be aggregated to form the digital fingerprint (generally including 2-D or 3-D location parameters).




Scanning


In this application, the term “scan” is used in the broadest sense, referring to any and all means for capturing an image or set of images, which may be in digital form or transformed into digital form. Images may, for example, be two dimensional, three dimensional, or in the form of a video. Thus a “scan” may refer to an image (or digital data that defines an image) captured by a scanner, a camera, a specially adapted sensor or sensor array (such as a CCD array), a microscope, a smartphone camera, a video camera, an x-ray machine, a sonar, an ultrasound machine, a microphone (or other instruments for converting sound waves into electrical energy variations), etc. Broadly, any device that can sense and capture either electromagnetic radiation or mechanical wave that has traveled through an object or reflected off an object or any other means to capture surface or internal structure of an object is a candidate to create a “scan” of an object.


Scanner elements may be discrete or integrated. For example, the scanner may be a camera in a smartphone, and the digital fingerprinting process may be an app on the same smartphone. Alternatively, intermediate data (for example, digital image data) may be transmitted over a network to a remote processor.


Various means to extract “fingerprints” or features from an object may be used; for example, through sound, physical structure, chemical composition, or many others. The remainder of this application will use terms like “image,” but when doing so, the broader uses of this technology should be implied. In other words, alternative means to extract “fingerprints” or features from an object should be considered equivalents within the scope of this disclosure. Similarly, terms such as “scanner” and “scanning equipment” herein may be used in a broad sense to refer to any equipment capable of carrying out “scans” as defined above, or to equipment that carries out “scans” as defined above as part of its function. Attestable trusted scanners should be used to provide images for digital fingerprint creation. A scanner may be a single device or a multitude of devices and scanners working together to enforce policy and procedures.


More information about digital fingerprinting is set forth below and can be found in various patents and publications assigned to Alitheon, Inc. including, for example, the following: DIGITAL FINGERPRINTING, U.S. Pat. No. 8,6109,762; OBJECT IDENTIFICATION AND INVENTORY MANAGEMENT, U.S. Pat. No. 9,152,862; DIGITAL FINGERPRINTING OBJECT AUTHENTICATION AND ANTI-COUNTERFEITING SYSTEM, U.S. Pat. No. 9,443,298; PERSONAL HISTORY IN TRACK AND TRACE SYSTEM, U.S. Pat. No. 10,037,537; PRESERVING AUTHENTICATION UNDER ITEM CHANGE, U.S. Pat. App. Pub. No. 2017-0243230 A1. Each of these patents and publications is hereby incorporated by this reference.


One of skill in the art will recognize that the concepts taught herein can be tailored to a particular application in many other ways. In particular, those skilled in the art will recognize that the illustrated examples are but one of many alternative implementations that will become apparent upon reading this disclosure. It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. The scope of the present invention should, therefore, be determined only by the following claims.

Claims
  • 1. A computer-implemented method to authenticate a composite physical object, the method comprising: selecting a class of objects to which the composite physical object belongs; accessing a stored template provided for authenticating objects of the selected class of objects; identifying the regions of the object specified in the template as required for authentication; scanning at least the identified regions of the composite physical object to acquire image data for each identified region; processing the acquired image data to extract digital fingerprints of each of the identified regions, wherein each digital fingerprint is based solely on one or more native features of the composite physical object and not based on any identifier, label, or other proxy added to the composite physical object for identification or authentication; based on the digital fingerprints, querying a database of reference objects of the selected class of objects to obtain a matching record, wherein the matching record requires that each and every identified region of the composite physical object match a corresponding region of a reference object in the matching record, based on the corresponding digital fingerprints, within a selected tolerance; and determining authenticity of the composite physical object based on the matching record, wherein: the selected class of composite physical objects is electronic apparatus; the corresponding template includes a list of components required to be mounted in the electronic apparatus as a condition of authentication regardless of their locations; and matching requires matching each of the required components to corresponding components in a reference database based on their digital fingerprints.
  • 2. The computer-implemented method of claim 1, wherein the template specifies a content for at least one of the identified regions of the composite physical object, as a requirement for authentication of the composite physical object; and the method further comprising: extracting content from the corresponding acquired image data of each of the at least one region of the physical object that requires specified content for authentication; for each of the said regions, comparing the extracted content to the corresponding specified content to generate results; and analyzing the results of all of the comparisons to the determination of authenticity of the composite physical object.
  • 3. The computer-implemented method of claim 2 wherein: the template further specifies, for at least one of the regions having specified content, at least one criterion for analyzing the comparison of the extracted content to the corresponding specified content; and the analyzing step is arranged to implement the at least one criterion specified in the template.
  • 4. The computer-implemented method of claim 2 wherein the template includes limits on positional variance and the method includes applying the limits in the step of comparing the determined location(s) to the corresponding specified location(s).
  • 5. The computer-implemented method of claim 2 further comprising analyzing content of a region by: comparing semantic meaning of the content of the region to corresponding semantic meaning provided in the template; and comparing digital fingerprints of the region to digital fingerprints of a corresponding region of a reference database.
  • 6. The computer-implemented method of claim 1 wherein: at least some of the regions of the object specified in the template as required for authentication correspond to components mounted in the electronic apparatus.
  • 7. The computer-implemented method of claim 6, wherein the electronic apparatus is a printed circuit board.
  • 8. The computer-implemented method of claim 1 wherein the template specifies a corresponding location of at least one of the identified regions of the physical object; and the method further comprises: determining a location of the at least one of the identified regions based on the template; comparing the determined location(s) on the object to the corresponding specified location(s) from the template; and conditioning the determination of authenticity based on comparing the determined location(s) on the object.
  • 9. The computer-implemented method of claim 8 wherein the location is specified in the template relative to a second one of the identified regions.
  • 10. The computer-implemented method of claim 8 wherein the location is specified in the template relative to the physical object as a whole.
  • 11. The computer-implemented method of claim 8 wherein the location is specified as X, Y (or X, Y, Z or X, Y, Z, T) coordinates and extent of a representative portion of the physical object.
  • 12. The computer-implemented method of claim 1 wherein the identified regions are specified in the template by respective locations and extents.
  • 13. The computer-implemented method of claim 1 wherein the identified regions are specified in the template by respective locations, extents, and contents.
US Referenced Citations (277)
Number Name Date Kind
4218674 Broscow et al. Aug 1980 A
4423415 Goldman Dec 1983 A
4677435 Causse D'Agraives et al. Jun 1987 A
4700400 Ross Oct 1987 A
4921107 Hofer May 1990 A
5031223 Rosenbaum et al. Jul 1991 A
5079714 Manduley et al. Jan 1992 A
5393939 Nasuta, Jr. et al. Feb 1995 A
5422821 Allen et al. Jun 1995 A
5514863 Williams May 1996 A
5518122 Tilles et al. May 1996 A
5521984 Denenberg et al. May 1996 A
5703783 Allen et al. Dec 1997 A
5719939 Tel Feb 1998 A
5734568 Borgendale et al. Mar 1998 A
5745590 Pollard Apr 1998 A
5883971 Bolle et al. Mar 1999 A
5923848 Goodhand et al. Jul 1999 A
5974150 Kaish et al. Oct 1999 A
6205261 Goldberg Mar 2001 B1
6246794 Kagehiro et al. Jun 2001 B1
6292709 Uhl et al. Sep 2001 B1
6327373 Yura Dec 2001 B1
6343327 Daniels, Jr. et al. Jan 2002 B2
6360001 Berger et al. Mar 2002 B1
6370259 Hobson et al. Apr 2002 B1
6400805 Brown et al. Jun 2002 B1
6424728 Ammar Jul 2002 B1
6434601 Rollins Aug 2002 B1
6470091 Koga et al. Oct 2002 B2
6539098 Baker et al. Mar 2003 B1
6549892 Sansone Apr 2003 B1
6597809 Ross et al. Jul 2003 B1
6643648 Ross et al. Nov 2003 B1
6697500 Woolston et al. Feb 2004 B2
6741724 Bruce et al. May 2004 B1
6768810 Emanuelsson et al. Jul 2004 B2
6778703 Zlotnick Aug 2004 B1
6805926 Cote et al. Oct 2004 B2
6816602 Coffelt et al. Nov 2004 B2
6829369 Poulin et al. Dec 2004 B2
6961466 Imagawa et al. Nov 2005 B2
6985926 Ferlauto et al. Jan 2006 B1
7016532 Boncyk et al. Mar 2006 B2
7031519 Elmenhurst Apr 2006 B2
7096152 Ong Aug 2006 B1
7120302 Billester Oct 2006 B1
7121458 Avant et al. Oct 2006 B2
7152047 Nagel Dec 2006 B1
7171049 Snapp Jan 2007 B2
7204415 Payne et al. Apr 2007 B2
7212949 Bachrach May 2007 B2
7333987 Ross et al. Feb 2008 B2
7343623 Ross Mar 2008 B2
7356162 Caillon Apr 2008 B2
7379603 Ross et al. May 2008 B2
7436979 Bruce Oct 2008 B2
7477780 Boncyk et al. Jan 2009 B2
7518080 Amato Apr 2009 B2
7602938 Proloski Oct 2009 B2
7674995 Desprez et al. Mar 2010 B2
7676433 Ross et al. Mar 2010 B1
7680306 Boutant Mar 2010 B2
7720256 Desprez et al. May 2010 B2
7726457 Maier et al. Jun 2010 B2
7726548 DeLaVergne Jun 2010 B2
7748029 Ross Jun 2010 B2
7822263 Prokoski Oct 2010 B1
7834289 Orbke Nov 2010 B2
7853792 Cowburn Dec 2010 B2
8022832 Vogt et al. Sep 2011 B2
8032927 Ross Oct 2011 B2
8108309 Tan Jan 2012 B2
8180174 Di Venuto May 2012 B2
8180667 Baluja et al. May 2012 B1
8194938 Wechsler et al. Jun 2012 B2
8316418 Ross Nov 2012 B2
8374399 Talwerdi Feb 2013 B1
8374920 Hedges et al. Feb 2013 B2
8391583 Mennie et al. Mar 2013 B1
8428772 Miette Apr 2013 B2
8437530 Mennie et al. May 2013 B1
8457354 Kolar et al. Jun 2013 B1
8477992 Paul et al. Jul 2013 B2
8520888 Spitzig Aug 2013 B2
8526743 Campbell et al. Sep 2013 B1
8774455 Elmenhurst et al. Jul 2014 B2
8959029 Jones Feb 2015 B2
9031329 Farid et al. May 2015 B1
9058543 Campbell Jun 2015 B2
9152862 Ross Oct 2015 B2
9170654 Boncyk et al. Oct 2015 B2
9224196 Duerksen et al. Dec 2015 B2
9234843 Sopori et al. Jan 2016 B2
9245133 Durst et al. Jan 2016 B1
9350552 Elmenhurst et al. May 2016 B2
9350714 Freeman et al. May 2016 B2
9361507 Hoyos et al. Jun 2016 B1
9361596 Ross et al. Jun 2016 B2
9443298 Ross et al. Sep 2016 B2
9558463 Ross et al. Jan 2017 B2
9582714 Ross et al. Feb 2017 B2
9646206 Ross et al. May 2017 B2
9665800 Kuffner May 2017 B1
10037537 Withrow et al. Jul 2018 B2
10043073 Ross et al. Aug 2018 B2
10192140 Ross et al. Jan 2019 B2
10199886 Li et al. Feb 2019 B2
10346852 Ross et al. Jul 2019 B2
10505726 Andon et al. Dec 2019 B1
10540664 Ross et al. Jan 2020 B2
10572883 Ross et al. Feb 2020 B2
10621594 Land et al. Apr 2020 B2
20010010334 Park et al. Aug 2001 A1
20010054031 Lee et al. Dec 2001 A1
20020015515 Lichtermann et al. Feb 2002 A1
20020073049 Dutta Jun 2002 A1
20020168090 Bruce et al. Nov 2002 A1
20030015395 Hallowell et al. Jan 2003 A1
20030046103 Amato et al. Mar 2003 A1
20030091724 Mizoguchi May 2003 A1
20030120677 Vernon Jun 2003 A1
20030138128 Rhoads Jul 2003 A1
20030179931 Sun Sep 2003 A1
20030182018 Snapp Sep 2003 A1
20030208298 Edmonds Nov 2003 A1
20030219145 Smith Nov 2003 A1
20040027630 Lizotte Feb 2004 A1
20040101174 Sato et al. May 2004 A1
20040112962 Farrall et al. Jun 2004 A1
20040218791 Jiang et al. Nov 2004 A1
20040218801 Houle et al. Nov 2004 A1
20050007776 Monk et al. Jan 2005 A1
20050038756 Nagel Feb 2005 A1
20050065719 Khan et al. Mar 2005 A1
20050086256 Owens et al. Apr 2005 A1
20050111618 Sommer et al. May 2005 A1
20050119786 Kadaba Jun 2005 A1
20050125360 Tidwell et al. Jun 2005 A1
20050131576 De Leo et al. Jun 2005 A1
20050137882 Cameron et al. Jun 2005 A1
20050160271 Brundage et al. Jul 2005 A9
20050169529 Owechko et al. Aug 2005 A1
20050188213 Xu Aug 2005 A1
20050204144 Mizutani Sep 2005 A1
20050251285 Boyce et al. Nov 2005 A1
20050257064 Boutant et al. Nov 2005 A1
20050289061 Kulakowski et al. Dec 2005 A1
20060010503 Inoue et al. Jan 2006 A1
20060083414 Neumann et al. Apr 2006 A1
20060109520 Gossaye et al. May 2006 A1
20060131518 Ross et al. Jun 2006 A1
20060177104 Prokoski Aug 2006 A1
20060253406 Caillon Nov 2006 A1
20070071291 Yumoto et al. Mar 2007 A1
20070085710 Bousquet et al. Apr 2007 A1
20070094155 Dearing Apr 2007 A1
20070211651 Ahmed et al. Sep 2007 A1
20070211964 Agam et al. Sep 2007 A1
20070230656 Lowes et al. Oct 2007 A1
20070263267 Ditt Nov 2007 A1
20070269043 Launay et al. Nov 2007 A1
20070282900 Owens et al. Dec 2007 A1
20080008377 Andel et al. Jan 2008 A1
20080011841 Self et al. Jan 2008 A1
20080128496 Bertranou et al. Jun 2008 A1
20080130947 Ross et al. Jun 2008 A1
20080219503 Di Venuto et al. Sep 2008 A1
20080250483 Lee Oct 2008 A1
20080255758 Graham et al. Oct 2008 A1
20080272585 Conard et al. Nov 2008 A1
20080290005 Bennett et al. Nov 2008 A1
20080294474 Furka Nov 2008 A1
20090028379 Belanger et al. Jan 2009 A1
20090057207 Orbke et al. Mar 2009 A1
20090106042 Maytal et al. Apr 2009 A1
20090134222 Ikeda May 2009 A1
20090154778 Lei et al. Jun 2009 A1
20090157733 Kim et al. Jun 2009 A1
20090223099 Versteeg Sep 2009 A1
20090232361 Miller Sep 2009 A1
20090245652 Bastos dos Santos Oct 2009 A1
20090271029 Doutre Oct 2009 A1
20090287498 Choi Nov 2009 A2
20090307005 O'Martin et al. Dec 2009 A1
20100027834 Spitzig et al. Feb 2010 A1
20100070527 Chen Mar 2010 A1
20100104200 Baras et al. Apr 2010 A1
20100157064 Cheng et al. Jun 2010 A1
20100163612 Caillon Jul 2010 A1
20100166303 Rahimi Jul 2010 A1
20100174406 Miette et al. Jul 2010 A1
20100286815 Zimmermann Nov 2010 A1
20110026831 Perronnin et al. Feb 2011 A1
20110064279 Uno Mar 2011 A1
20110081043 Sabol et al. Apr 2011 A1
20110091068 Stuck et al. Apr 2011 A1
20110161117 Busque et al. Jun 2011 A1
20110188709 Gupta et al. Aug 2011 A1
20110194780 Li et al. Aug 2011 A1
20110235920 Iwamoto et al. Sep 2011 A1
20110267192 Goldman et al. Nov 2011 A1
20120042171 White et al. Feb 2012 A1
20120089639 Wang Apr 2012 A1
20120130868 Loken May 2012 A1
20120177281 Frew Jul 2012 A1
20120185393 Atsmon et al. Jul 2012 A1
20120199651 Glazer Aug 2012 A1
20120242481 Gernandt et al. Sep 2012 A1
20120243797 Dayer et al. Sep 2012 A1
20120250945 Peng et al. Oct 2012 A1
20130212027 Sharma et al. Aug 2013 A1
20130214164 Zhang et al. Aug 2013 A1
20130273968 Rhoads et al. Oct 2013 A1
20130277425 Sharma et al. Oct 2013 A1
20130284803 Wood et al. Oct 2013 A1
20140032322 Schwieger et al. Jan 2014 A1
20140140570 Ross et al. May 2014 A1
20140140571 Elmenhurst et al. May 2014 A1
20140201094 Herrington et al. Jul 2014 A1
20140184843 Campbell et al. Sep 2014 A1
20140270341 Elmenhurst et al. Sep 2014 A1
20140314283 Harding Oct 2014 A1
20140380446 Niu et al. Dec 2014 A1
20150058142 Lenahan et al. Feb 2015 A1
20150067346 Ross et al. Mar 2015 A1
20150078629 Gottemukkula et al. Mar 2015 A1
20150086068 Mulhearn et al. Mar 2015 A1
20150117701 Ross Apr 2015 A1
20150127430 Hammer, III May 2015 A1
20150248587 Oami et al. Sep 2015 A1
20150294189 Benhimane et al. Oct 2015 A1
20150309502 Breitgand et al. Oct 2015 A1
20150371087 Ross et al. Dec 2015 A1
20160034914 Gonen et al. Feb 2016 A1
20160055651 Oami Feb 2016 A1
20160057138 Hoyos et al. Feb 2016 A1
20160117631 McCloskey et al. Apr 2016 A1
20160162734 Ross et al. Jun 2016 A1
20160180546 Kim et al. Jun 2016 A1
20160189510 Hutz Jun 2016 A1
20160203387 Lee et al. Jul 2016 A1
20160335520 Ross et al. Nov 2016 A1
20170004444 Krasko et al. Jan 2017 A1
20170032285 Sharma et al. Feb 2017 A1
20170132458 Short et al. May 2017 A1
20170243230 Ross et al. Aug 2017 A1
20170243231 Withrow et al. Aug 2017 A1
20170243232 Ross et al. Aug 2017 A1
20170243233 Land et al. Aug 2017 A1
20170253069 Kerkar et al. Sep 2017 A1
20170295301 Liu et al. Oct 2017 A1
20170300905 Withrow et al. Oct 2017 A1
20170344823 Withrow Nov 2017 A1
20170372327 Withrow Dec 2017 A1
20180012008 Withrow et al. Jan 2018 A1
20180018627 Ross et al. Jan 2018 A1
20180018838 Fankhauser et al. Jan 2018 A1
20180024074 Ranieri et al. Jan 2018 A1
20180024178 House et al. Jan 2018 A1
20180047128 Ross et al. Feb 2018 A1
20180053312 Ross Feb 2018 A1
20180121643 Talwerdi et al. May 2018 A1
20180144211 Ross et al. May 2018 A1
20180315058 Withrow et al. Nov 2018 A1
20180349694 Ross et al. Dec 2018 A1
20190000265 Leizerson Jan 2019 A1
20190034694 Ross Jan 2019 A1
20190102873 Wang et al. Apr 2019 A1
20190228174 Withrow et al. Jul 2019 A1
20190287118 Ross et al. Sep 2019 A1
20190342102 Hao et al. Nov 2019 A1
20200153822 Land et al. May 2020 A1
20200226366 Withrow et al. Jul 2020 A1
20200233901 Crowley et al. Jul 2020 A1
20200250395 Ross et al. Aug 2020 A1
20200257791 Ross et al. Aug 2020 A1
Foreign Referenced Citations (37)
Number Date Country
102006005927 Aug 2007 DE
0439669 Aug 1991 EP
0759596 Feb 1997 EP
1016548 Jul 2000 EP
1719070 Apr 2009 EP
2107506 Oct 2009 EP
2166493 Mar 2010 EP
2195621 Nov 2013 EP
2866193 Apr 2015 EP
2257909 May 2015 EP
2869240 May 2015 EP
2869241 May 2015 EP
3208744 Aug 2017 EP
3249581 Nov 2017 EP
3270342 Jan 2018 EP
3435287 Jan 2019 EP
2097979 Nov 1982 GB
2482127 Jan 2012 GB
S61234481 Oct 1986 JP
2007213148 Aug 2007 JP
20120009654 Feb 2012 KR
WO2005086616 Sep 2005 WO
WO2006038114 Apr 2006 WO
WO2007028799 Mar 2007 WO
WO2007031176 Mar 2007 WO
WO2007071788 Jun 2007 WO
WO2007090437 Aug 2007 WO
WO2007144598 Dec 2007 WO
WO2009030853 Mar 2009 WO
WO2009089126 Jul 2009 WO
WO2009115611 Sep 2009 WO
WO2010018464 Feb 2010 WO
WO2012145842 Nov 2012 WO
WO2013126221 Aug 2013 WO
WO2013173408 Nov 2013 WO
WO2015004434 Jan 2015 WO
WO2016081831 May 2016 WO
Non-Patent Literature Citations (36)
Cavoukian et al., "Biometric Encryption: Technology for Strong Authentication, Security and Privacy," Office of the Information and Privacy Commissioner, Toronto, Ontario, Canada, 2008, in IFIP International Federation for Information Processing, vol. 261, Policies and Research in Identity Management, Eds. E. de Leeuw, S. Fischer-Hübner, J. Tseng, and J. Borking (Boston: Springer), pp. 57-77 (21 pages).
Farid, "Digital Image Forensics," Dartmouth CS 89/189, Spring 2013, 199 pages.
Huang et al., “A Novel Binarization Algorithm for Ballistic Imaging Systems,” 3rd International Congress on Image and Signal Processing, Yantai, China, Oct. 16-18, 2010, pp. 1287-1291.
Huang et al., “An Online Ballistics Imaging System for Firearm Identification,” 2nd International Conference on Signal Processing Systems, Dalian, China, Jul. 5-7, 2010, vol. 2, pp. 68-72.
Li, "Firearm Identification System Based on Ballistics Image Processing," Congress on Image and Signal Processing, School of Computer and Information Science, Faculty of Computing, Health and Science, Edith Cowan University, Mount Lawley, WA, Perth, Australia, pp. 149-154.
Online NCOALINK® Processing Acknowledgement Form (PAF) Released by Lorton Data, Jun. 2, 2009, URL=http://us.generation-nt.com/online-ncoalink-processing-acknowledgement-form-paf-released-by-press-1567191.html, download date Jun. 25, 2010, 2 pages.
Smith, "Fireball: A Forensic Ballistic Imaging System," Proceedings of the 31st Annual International Carnahan Conference on Security Technology, Canberra, Australia, Oct. 15-17, 1997, pp. 64-70.
United States Postal Service, “NCOALink® Systems,” URL=https://web.archive.org/web/20100724142456/http://www.usps.com/ncsc/addressservices/moveupdate/changeaddress.htm, download date Jun. 23, 2010, 2 pages.
United States Postal Service, Publication 28, "Postal Addressing Standards," dated Jul. 2008; text plus Appendix A only; 55 pages.
Bao et al., "Local Feature based Multiple Object Instance Identification using Scale and Rotation Invariant Implicit Shape Model," 12th Asian Conference on Computer Vision, Singapore, Nov. 1-5, 2014, pp. 600-614.
Beekhof et al., "Secure Surface Identification Codes," Proceedings of SPIE 6819: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X: 68190D, 2008 (12 pages).
Buchanan et al., “Fingerprinting documents and packaging,” Nature 436 (7050): 475, 2005.
Di Paola et al., “An Autonomous Mobile Robotic System for Surveillance of Indoor Environments,” International Journal of Advanced Robotic Systems 7(1): 19-26, 2010.
Fischler et al., "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM 24(6): 381-395, 1981.
Kartik et al., “Security System with Face Recognition, SMS Alert and Embedded Network Video Monitoring Terminal,” International Journal of Security, Privacy and Trust Management 2(5):9-19, 2013.
Li, "Image Processing for the Positive Identification of Forensic Ballistics Specimens," Proceedings of the 6th International Conference on Information Fusion, Cairns, Australia, Jul. 8-11, 2003, pp. 1494-1498.
Maddern et al., "Illumination Invariant Imaging: Applications in Robust Vision-based Localization, Mapping and Classification for Autonomous Vehicles," IEEE International Conference on Robotics and Automation, Hong Kong, China, May 31-Jun. 7, 2014, 8 pages.
Matsumoto et al., “Nano-artifact metrics based on random collapse of resist,” Scientific Reports 4:6142, 2014 (5 pages).
Rublee et al., “ORB: an efficient alternative to SIFT or SURF,” IEEE International Conference on Computer Vision, Barcelona, Spain, Nov. 6-13, 2011, 8 pages.
Schneider et al., "A Robust Content Based Digital Signature for Image Authentication," Proceedings of the International Conference on Image Processing, Lausanne, Switzerland, Sep. 19, 1996, pp. 227-230.
Shi et al., "Smart Cameras: Fundamentals and Classification," Chapter 2 in Belbachir (ed.), Smart Cameras, Springer, New York, New York, USA, 2010, pp. 19-34.
Takahashi et al., “Mass-produced Parts Traceability System Based on Automated Scanning of Fingerprint of Things,” 15th IAPR International Conference on Machine Vision Applications, Nagoya, Japan, May 8-12, 2017, 5 pages.
Veena et al., “Automatic Theft Security System (Smart Surveillance Camera),” Computer Science & Information Technology 3:75-87, 2013.
United States Postal Service, "NCOALink® Systems," dated May 27, 2009, URL=http://ribbs.usps.gov/ncoalink/ncoalink_print.htm, download date Jun. 23, 2010, 3 pages.
Ebay, "eBay Launches Must-Have iPhone App RedLaser 3.0," published Nov. 18, 2011, https://www.ebayinc.com/stories/news/ebay-launches-must-have-iphone-app-redlaser-30/, downloaded Mar. 21, 2019, 7 pages.
Shields, "How to Shop Savvy With RedLaser," published online on Mar. 22, 2010, https://iphone.appstorm.net/reviews/lifestyle/how-to-shop-savvy-with-redlaser/, downloaded Mar. 22, 2019, 8 pages.
Entrupy.com Website History, Wayback Machine; https://web.archive.org/web/20160330060808/https://www.entrupy.com/; Mar. 30, 2016 (Year: 2016), 2 pages.
Anonymous, "Intrinsic Characteristics for Authentication" & "AlpVision Advances Security Through Digital Technology," Authentication News, vol. 12, No. 9, pp. 2, 7 and 8, dated Sep. 2006, 3 pages total.
Mistry et al., “Comparison of Feature Detection and Matching Approaches: SIFT and SURF,” Global Research and Development Journal for Engineering, vol. 2, Issue 4, Mar. 2017, 8 pages.
Woods, "Counterfeit-spotting truth machine launches out of Dumbo," published online on Feb. 11, 2016, downloaded from https://technical.ly/brooklyn/2016/02/11/entrupy-counterfeit-scanner/ on Mar. 20, 2019, 3 pages.
Drew, M. S., et al., “Sharpening from Shadows: Sensor Transforms for Removing Shadows using a Single Image,” Color and Imaging Conference, vol. 5, Society for Imaging Science and Technology, 2009, pp. 267-271.
Sharma et al., “The Fake vs Real Goods Problem: Microscopy and Machine Learning to the Rescue,” KDD 2017 Applied Data Science Paper, Aug. 13-17, 2017, Halifax, NS, Canada, 9 pages.
Schwabe Williamson & Wyatt, PC—Listing of Related Cases; dated Sep. 16, 2017; 2 pages.
Jain, Anil K., et al., "Biometric Cryptosystems: Issues and Challenges," Proceedings of the IEEE, IEEE, New York, US, vol. 92, No. 6, Jun. 1, 2004, XP011112757, pp. 948-960.
Truong, Hieu C., et al., "Royal Canadian Mint/Signoptic Technologies Coin DNA Technology," World Money Fair (WMF), Berlin, Feb. 1-3, 2011, http://www.amisdeleuro.org/upload/1340734488.pptx, 22 pages.
Zaeri, Naser, "Minutiae-based Fingerprint Extraction and Recognition," 2010 (Year: 2010), 47 pages.
Related Publications (1)
Number Date Country
20210117520 A1 Apr 2021 US