Object change detection and measurement using digital fingerprints

Information

  • Patent Grant
  • Patent Number
    11,488,413
  • Date Filed
    Friday, January 14, 2022
  • Date Issued
    Tuesday, November 1, 2022
  • CPC
    • G06V40/1371
    • G06V40/1353
    • G06V40/1329
  • Field of Search
    • CPC
    • G06V40/1371
    • G06V40/1353
    • G06V40/1329
    • G06V10/757
    • G06V20/66
    • G06V20/80
    • G06K9/6255
    • G06K9/6215
  • International Classifications
    • G06V40/12
    • G06V40/13
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
The present disclosure teaches a method of utilizing image “match points” to measure and detect changes in a physical object. In some cases “degradation” or “wear and tear” of the physical object is assessed, while in other applications this disclosure is applicable to measuring intentional changes, such as changes made by additive or subtractive manufacturing processes, which may, for example, involve adding a layer or removing a layer by machining. A system may include a scanner, and a digital fingerprinting process, coupled to an object change computer server. The server is coupled to a datastore that stores class digital fingerprints, selected object digital fingerprints collected over time, match measurements, and deterioration metrics.
Description
COPYRIGHT NOTICE

© Alitheon, Inc. 2019-2020. A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, if and when they are made public, but otherwise reserves all copyright rights whatsoever. 37 CFR § 1.71(d).


FIELD OF THE DISCLOSURE

This application pertains to methods, systems, and software for detecting and measuring changes over time to physical objects, through the use of digital fingerprints.


BACKGROUND OF THE DISCLOSURE

It is often necessary or desirable to determine whether or not a physical object has changed from a prior state and, in certain cases, to determine how much a physical object has changed from a prior state. The need remains to reliably identify objects, detect and measure changes in the object over time, and from those changes assess authenticity, provenance, quality, condition, and/or degradation over time.


SUMMARY OF THE DISCLOSURE

The following is a summary of the present disclosure to provide a basic understanding of some features and context. This summary is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. Its sole purpose is to present some concepts of the present disclosure in simplified form as a prelude to a more detailed description that is presented later.


In one embodiment, a system comprises a combination of digital fingerprint techniques, processes, programs, and hardware to enable a method of utilizing image “match points” to measure and detect changes in a physical object. In some cases “degradation” or “wear and tear” of the physical object is assessed, while in other applications this disclosure is applicable to measuring intentional changes, such as changes made by additive or subtractive manufacturing processes, which may, for example, involve adding a layer or removing a layer by machining. A system may include a scanner, and a digital fingerprinting process, coupled to an object change computer server. The object change computer server may be coupled to a datastore that stores class digital fingerprints, selected object digital fingerprints collected over time, match measurements, and deterioration metrics.


Additional aspects and advantages of the present disclosure will be apparent from the Detailed Description, which proceeds with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

To enable the reader to realize one or more of the above-recited and other advantages and features of the present disclosure, a more particular description follows by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the disclosure and are not therefore to be considered limiting of its scope, the present disclosure will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 is a simplified block diagram of one example of a system to detect and measure changes in physical objects using digital fingerprints.



FIG. 2 is an image of a portion of a dollar bill with digital fingerprinting points of interest superimposed as circles on the image.



FIG. 3 is an image of a portion of a turbine blade with digital fingerprinting points of interest superimposed as circles on the image.



FIG. 4 is a simplified flow diagram illustrating a first process to detect and measure degradation in physical objects using digital fingerprints.



FIG. 5 is a simplified flow diagram of a second process to detect and measure degradation in physical objects using digital fingerprints.





DETAILED DESCRIPTION OF ONE OR MORE EMBODIMENTS

It is often necessary or desirable to determine whether or not a physical object has changed from a prior state and, in certain cases, to determine how much a physical object has changed from a prior state. The present disclosure teaches a method of utilizing image “match points” to measure and detect changes in a physical object. While some aspects of the results presented herein may be achieved by examining changes in images that are not changes in match points, existing image-based approaches are significantly less general (for additive as well as subtractive processes, for example), and thus more limited, than the methods taught in this disclosure.


In this disclosure, frequent reference is made to “degradation” or “wear and tear,” but the teachings of this disclosure are equally applicable to measuring intentional changes, such as changes made by additive or subtractive manufacturing processes, which may, for example, involve adding a layer or removing a layer by machining, and any other processes that result in changes to the state of an object.


One aspect of this disclosure is the use of measured changes in a digital fingerprint of an object as a measure of its actual change from its initial condition, or any prior condition. The techniques are primarily described in relation to measuring wear and tear and determining that an object should be replaced, but the techniques described here also have applicability in measuring the effects of other changes, such as the effectiveness of anodizing a surface or of an additive manufacturing process in, for example, adding a layer or a feature to an object under construction, or of any occurrence that results in a change in or to the material substance of a physical object. The methods taught in the present disclosure may also be applied to the grading of objects, such as a coin, based on the degradation of the object from a standard. One application of the taught method is determining when a currency note has degraded to such a degree that it needs to be recycled.


As will be shown, the measured degradation may be from a digital fingerprint created at an earlier stage of the object itself or from a standard representative of the kind (class) of object being measured, or by other means. This standard may be, for example, a particular object, a physical or digital model, or it may be an aggregate of many objects of the same class. The concept of measured degradation will be further elaborated below.


During the creation and/or use of various physical goods, it is often necessary or desirable to determine that a physical good or object has changed from some prior state—and to determine how much it has changed or in what manner it has changed. This disclosure teaches a method for using digital fingerprints to determine and measure various object changes in a wide range of physical objects and under a range of conditions. Examples will be given where the change is one of wear-and-tear, corrosion, ablation, or other essentially subtractive processes—where the surface elements are gradually removed. Equally important are changes to the surface that occur through additive processes, such as a step in an additive manufacturing process, or a person's addition of makeup to his or her face. The additive conditions, the subtractive conditions, and conditions where both addition and subtraction take place at the same time or in succession (e.g. abrasion, corrosion, and patination) are in view of this disclosure.


Wear and tear generally refers to changes in the surface or near-surface conditions that may be seen through electromagnetic interaction. Note that the use of any part of the electromagnetic spectrum is in view of this disclosure, although most of the examples will involve visible light. No particular method of extracting digital fingerprints is required for this disclosure. However, the methods chosen will need to be ones that extract the digital fingerprints from the surface or near-surface features of the material substance of a physical object. In the present disclosure, the term “surface” will mean surface or near-surface.


Wear and tear as defined in this disclosure consists of progressive changes in the surface characteristics of a physical object. The changes may be of many kinds including abrasion, creasing, soiling, aging, oxidizing, physical and/or chemical weathering, or any other progressive change, including intentional changes such as plating. It is not to be implied that changes to an object are necessarily negative or are indications of damage. Some of the embodiments discussed below involve an object “improving” (e.g. bringing the object closer to completion of the manufacturing process). Since digital fingerprints of the type described are derived from surface characteristics of an object, changes in the surface characteristics may be quantified by measuring changes in digital fingerprints of the surface.


In this disclosure, frequent reference will be made to matching a digital fingerprint created at one point in time with a digital fingerprint created at a different point in time and using the degree of match as a component in the system. The matching may be done in various ways. As an example, an embodiment may carry out the matching of a plurality of digital fingerprints by calculating a simple threshold feature vector distance for each point of interest. Those closer than the threshold could be deemed to be a “match”. Other known types of calculations may be employed. See the description of Digital Fingerprinting, below, for more detail.
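
The following is a minimal sketch of the thresholded feature-vector matching just described; the array layout, function names, and the threshold value are illustrative assumptions rather than requirements of this disclosure.

```python
# Illustrative sketch of thresholded feature-vector matching (assumed data
# layout: one row per point-of-interest feature vector).
import numpy as np

def count_match_points(fp_a, fp_b, threshold=0.25):
    """Count points of interest in fp_a whose closest feature vector in fp_b
    lies within the threshold (Euclidean) distance."""
    matches = 0
    for vec in fp_a:
        dists = np.linalg.norm(fp_b - vec, axis=1)  # distance to every point in fp_b
        if dists.min() <= threshold:
            matches += 1
    return matches

def match_score(fp_a, fp_b, threshold=0.25):
    """Fraction of fp_a's points that find a match in fp_b."""
    return count_match_points(fp_a, fp_b, threshold) / max(len(fp_a), 1)
```

In practice the distance measure and threshold would be chosen per application, as noted above.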


The technology disclosed herein may differentiate between several types of changes in physical objects (based on the way the points of interest change) and provide a measurement not only of overall change but of different types or categories of changes. Some examples, mentioned elsewhere in this disclosure, include distinguishing and measuring additive and subtractive changes separately. Those examples are given but any kind of change that can be isolated based on digital fingerprinting is in view of the taught system.


While this disclosure is not limited to the use of visible light, most of the examples given use visible light. In most instances, the taught technology enables the use of visible light with no special equipment, which means that the hardware required to run the system may in many cases be consumer-level electronics (e.g. a smart phone). The ability to use relatively inexpensive equipment provides a considerable benefit over current approaches, many of which require specialized and often expensive equipment.


Also in view in this disclosure is the ability of multiple users at different times and places to contribute to a conceptual map of where (or when, how, why, etc.) changes to an object occur, which may enable a better understanding of the history of the object and allow that history to be recorded. For example, the system of FIG. 1 illustrates the use of a remote system and smartphone (172) camera to capture image data of a physical object at virtually any time and location. Also, the taught system may enable the detection of places where, for example, excessive wear is occurring, thereby enabling possible fixes. Thus, the teachings of this disclosure may apply not only throughout a manufacturing process but also through any process (such as a distribution network) wherein the object is handled and may undergo intentional or unintentional change.



FIG. 1 is a simplified block diagram of one example of a system to detect and measure changes in physical objects using digital fingerprints. In this example, a set of physical objects 100 of the same class is collected. The objects are presented one at a time into the field of view of a suitable imager (scanner, camera, etc.) 102 to acquire image data. The image data is input to a digital fingerprinting process 104 to form a digital fingerprint for each object. The digital fingerprinting process 104 may be integrated into the imager, stored in the server 110, etc.; it can also be a remote process. The resulting set of digital fingerprints is provided via path 112 to an object change server 110. Any suitable computer server can be provisioned to function as an object change server. It may be local or provisioned “in the cloud.” In this example, the server 110 is coupled by its communications component 150 to a network 160, which may be a LAN, a WAN, the internet, etc. Almost any digital networking hardware and communication protocols can be used.


A remote induction process 162 may be provisioned and coupled to the server 110 via the network 160. This enables inducting an object from a remote location, i.e., capturing image data of the object and adding the corresponding digital fingerprint of the remote object to the server 110. The server 110 may store digital fingerprints in a coupled datastore 116, which again may be local or provisioned in the cloud. The object change server 110 can store data in the datastore 116, including digital fingerprint data, to accomplish the functions described herein. For example, the datastore can maintain class digital fingerprints; individual (selected object) digital fingerprints, including multiple versions over time; and reference object digital fingerprints or digital models of an object.


The matching process can be carried out, for example, by an analysis component 144. It may use a query manager 142 to access various records as needed in the datastore 116. Results such as match measurements, deterioration metrics, and object histories can be stored and updated in the datastore. In one scenario, a remote user 170 may capture an object image using a smartphone 172. The image data can be used to generate a corresponding digital fingerprint, locally on the phone, in the user system 170, at a remote induction component 162, or at the server 110. The object digital fingerprint can be stored in the datastore. It can be used to query the datastore to find a matching record from an earlier induction, to compare against that earlier record to determine changes, and optionally to update the object history.
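
As a rough illustration of the kinds of records the datastore 116 might hold (class fingerprints, per-object fingerprints over time, match measurements, and deterioration metrics), the following sketch defines two simple record types; the field names and types are assumptions made for the example, not part of this disclosure.

```python
# Minimal data-model sketch (field names are illustrative assumptions).
from dataclasses import dataclass, field
from typing import List

@dataclass
class FingerprintRecord:
    object_id: str
    captured_at: str            # timestamp of this induction
    feature_vectors: list       # one feature vector per point of interest

@dataclass
class ObjectHistory:
    object_id: str
    class_id: str               # links the object to a class digital fingerprint
    fingerprints: List[FingerprintRecord] = field(default_factory=list)
    match_measurements: List[float] = field(default_factory=list)
    deterioration_metric: float = 0.0
```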


Objects with Substantially Similar Background Features

Two categories of objects are considered: First, those that are members of a group of similar objects where a substantial portion of the digital fingerprint of an individual object is shared by the similar objects; and second, those objects where there is no such sharing. Whether or not an object shares a “substantial portion” of the digital fingerprint with another may be measured by a threshold value, may be defined by a model or a template, or may be determined by some other measure. Shared or potentially-shared portions of digital fingerprints are, for the purposes of this disclosure, called “background features,” although the name is not critical or to be taken literally. In general (though not exclusively) the background features make different members of the group look similar even at fairly detailed levels of examination. A “fairly detailed level of examination” of a physical object may utilize on the order of 5-10× magnification, although these values are not critical. An example of a set of objects that has significant background features is currency bills. In fact, background features are commonly intentionally-added characteristics in paper currency. For example, in the case of a dollar bill, its background features are consistent enough to distinguish a one-dollar bill from a twenty-dollar bill and would include such components as Washington's face and the numeral “1” at the four corners of the bill.



FIG. 2 is an image of a portion of a dollar bill with digital fingerprinting points of interest identified by superimposed circles on the image. In this example, the points of interest are almost entirely located around “background” features. In one embodiment, the size of an individual superimposed circle reflects the size of the point of interest. That is, the size of the circle is proportional to the scale at which the corresponding point of interest was found. The scale of a point of interest is the size of the radius around a point at which the Laplacian function (used to locate the point of interest) is strongest as one varies the size of the region.


In more detail, in one embodiment, the Laplacian is used to calculate the curvature at each point on the surface. To do this, the Laplacian requires two things: the point at its center and how far out to look when calculating it. The latter is the “scale” of the Laplacian. One may pass a Laplacian of a given scale over the surface and choose, as the locations of the points of interest, the local maxima of the absolute value of the Laplacian. This is done for various scales. At a given point, as the Laplacian scale goes from very small to very large (over a pre-specified range of scales), there is some scale at which its absolute value is largest. That value of the scale is then chosen. The point of interest circle indicates that scale.
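
A short sketch of this scale selection follows, using a scale-normalized Laplacian of Gaussian; the scale range, the normalization factor, and the function names are illustrative assumptions.

```python
# Sketch of Laplacian scale selection at a candidate point (assumptions:
# grayscale image as a NumPy array, a fixed set of candidate scales).
import numpy as np
from scipy.ndimage import gaussian_laplace

def best_scale(image, y, x, scales=(1, 2, 4, 8, 16)):
    """Return (scale, strength): the scale at which the scale-normalized
    Laplacian-of-Gaussian response at pixel (y, x) is strongest in absolute
    value. The chosen scale sets the radius of the point-of-interest circle."""
    best, best_strength = None, -1.0
    img = image.astype(float)
    for s in scales:
        response = (s ** 2) * gaussian_laplace(img, sigma=s)  # scale-normalized LoG
        strength = abs(response[y, x])
        if strength > best_strength:
            best, best_strength = s, strength
    return best, best_strength
```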


The examples and descriptions provided here are not meant to be limiting in any way but rather to provide a clear visual of the kind of object being referred to. The digital fingerprints of currency bills often have points of interest whose locations are shared among many such bills and whose characterizing feature vectors are relatively similar as well. The reason for the similarity is that the object has features which are intentional features common to the group of objects and the digital fingerprinting process is finding those common features. In general, such points of interest are close physically (meaning they are located at the same point on each dollar bill) but vary in characterization (since things like paper fibers and ink bleeds cause the feature vectors of extracted digital features to vary significantly).


Objects that do not have Substantially Similar Background Features


The second category is objects that do not have significant shared background features. Machined part surfaces commonly fall into this category. To the naked eye, machined parts may appear to be extremely similar—for example, different brake pads of the same lot—but when viewed at a resolution that on a dollar bill would show, for example, the background features of George Washington's face, such different parts of the same kind share few or no common features. One reason is that, for example, machined part surfaces may consist primarily of casting or machining marks, which show little or no commonality across members of the type of object.



FIG. 3 is an image of a portion of a turbine blade with digital fingerprinting points of interest superimposed as circles on the image. The image shows a section of a turbine blade. With very few exceptions, all the points marked on the photo are characteristics of the surface of the individual blade and not of a class of turbine blades. The few exceptions may be seen along the dark line across the image. In many cases it is obvious which surface features are responsible for a given point of interest. In this case, essentially all the points of interest are independent in both position and value (though obviously tied to the surface of the object). They are useful in identifying the individual turbine blade but useless in determining that the object is a turbine blade. Essentially all points of interest are dissimilar or “foreground” features.


In some embodiments, the image may be displayed in color to provide more information. The dark line is actually in the image. In one example, a first color may be used to display circles that are above a certain strength or threshold value (“strong points of interest”), while a second color may be used to show weaker points of interest, i.e., those below the threshold value. In another example, additional colors, for example, four or five colors, may be applied to points of interest circles in a visual display to show a (quantized) range of strength values. The “strength” of a point of interest is correlated to the ability to find it under different conditions of angle and illumination. The use of visual displays including circles (or other shapes), colors, etc. may help human users to understand the computerized processes at work.
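
The following sketch illustrates one way such a display could be drawn, with circle radius reflecting scale and color reflecting a strength threshold; the point-record layout, colors, and threshold value are assumptions made for illustration only.

```python
# Illustrative sketch: draw points of interest as circles sized by scale and
# colored by strength (assumed tuple layout: (x, y, scale, strength)).
import cv2

def draw_points(image_bgr, points, strength_threshold=0.5):
    strong_color = (0, 0, 255)  # first color (red) for strong points of interest
    weak_color = (255, 0, 0)    # second color (blue) for weaker points
    out = image_bgr.copy()
    for x, y, scale, strength in points:
        color = strong_color if strength >= strength_threshold else weak_color
        cv2.circle(out, (int(x), int(y)), max(1, int(round(scale))), color, 1)
    return out
```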


Two different processes for detecting and measuring changes to an object are described in more detail below. Which process to apply depends on the category of objects to be considered. It should be kept in mind that while two distinct categories are described herein, in practice, objects generally fall within a range from one to the other. One example is coin grading, where both addition and subtraction may be equally important. While different approaches are described for the two categories, it is anticipated that for many objects a combination of approaches will be required.


The system taught herein works when the change is negative or subtractive, such as when parts of the surface abrade, as well as when the change is additive. In either case, the match with the reference digital fingerprint will degrade. In the former case, degradation is caused by removal of the surface feature that was characterized as a point of interest in the reference fingerprint. In the latter case, that surface feature has been covered over by an additive process.


In one embodiment of this disclosure, features common to multiple examples of a class of objects are identified, and the common points of interest are characterized and then used for determining changes in members of that class of objects. At different times in the object's life cycle, its digital fingerprint is compared with the digital fingerprint composed of the common points of interest, and the change in correspondence to those common points is measured and used as a measure of the change in the object. This is particularly useful when the object being measured was not inducted (i.e., digitally fingerprinted) when new.


One advantage of the approach of using common points is the ability to produce a degradation measuring system that does not require that a particular object (of this kind) be inducted as a reference. Instead of measuring degradation of a particular object from its earlier induction, the degradation in the object's match with a group digital fingerprint comprising the common points is measured. An example of this type of process follows.


Process for a Class of Objects with a Substantial Number of Common Points of Interest



FIG. 5 is a simplified flow diagram of a method to detect and measure degradation in physical objects using digital fingerprints. A set of, preferably, new objects (although it is not strictly necessary that the objects be new) that are all members of a class with a substantial number of common points is selected, block 502. The size of the set is chosen such that adding more members does not significantly increase the number of common points. Each member object of the set is inducted, and its digital fingerprint is extracted. Though the digital fingerprint may contain many additional types of information, for the purposes of this disclosure, points of interest that have (up to invariances) the same, or very similar, location in multiple members of the class are considered. The exact number of times a point will occur in the class depends on the number of objects in the class-specific data set and the consistency and constitution of the background, but it has been found that, typically, if a point occurs in more than 14 members of such a class, it is likely to occur in a great many more (if the class is large enough) and is therefore considered a common or “background” point or feature.


Meaning of “up to invariances”. It was stated above that the common points are in the same (or very similar) locations “up to invariances”. What this means depends on the type of invariance that the processing of the object achieves. Consider a dollar bill as an example. If an image of a dollar bill is always captured in the same orientation and with the same resolution, the common points of the dollar bill class will be in very nearly the same location in each image. If, however, the bill may be imaged in any orientation, the location of the common points is “the same” after correcting for the degree of rotation between members of the class. If the image can be captured at different resolutions, the common points are located in “the same location” up to a change in scale. Similar statements can be made for offset, affine, homography, projective, perspective, rubber sheet, and other variations. A point on one object is considered to be “in the same location” as a point on another object of the same class if, when a matching of digital fingerprints is performed, they are considered to be properly located for a match. In this description, when it is stated that two points on different class members are located in the same place, “up to invariances” should always be understood to be implied.


Now that a set of common points of interest is defined, a “class digital fingerprint” (or simply, “class fingerprint”) can be generated, so that the class fingerprint contains, preferably, only the common points of interest, block 504. The class fingerprint can be used for matching an object to a class (that is, for determining that a particular object is a member of a particular class whose class fingerprint the fingerprint of the object matches within a threshold). That is an important use, but it is not directly relevant to this section of the description. The class fingerprint may contain all the points found (including, say, 15 instances of each located very close to each other), an average of the characteristics of each such point, or any other combination of them. Significantly, when the process is completed, the outcome will include a set of points, many of which are likely to occur in any example of the object class.
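
A hedged sketch of building such a class fingerprint follows. It groups points of interest that recur at roughly the same location (already aligned “up to invariances”) across inducted members and averages them; the grouping radius, the minimum occurrence count, and the averaging choice are illustrative assumptions.

```python
# Sketch: aggregate recurring points of interest into a class fingerprint.
# Assumed layout per fingerprint: array of shape (n_points, 2 + d), columns
# 0-1 are aligned (x, y) locations, remaining columns are the feature vector.
import numpy as np

def build_class_fingerprint(fingerprints, radius=3.0, min_occurrences=15):
    all_points = np.vstack(fingerprints)
    used = np.zeros(len(all_points), dtype=bool)
    class_points = []
    for i, p in enumerate(all_points):
        if used[i]:
            continue
        # group every point (from any inducted object) within `radius`
        close = np.linalg.norm(all_points[:, :2] - p[:2], axis=1) <= radius
        group = all_points[close & ~used]
        used |= close
        # a production system would count distinct objects rather than points
        if len(group) >= min_occurrences:
            class_points.append(group.mean(axis=0))  # mean location + mean features
    return np.array(class_points)
```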


For some applications, the selected set of objects should be objects that are as close as possible to a desired reference state. In many cases the preferred reference state will be “new”, but in the case of an object inducted previously in its life cycle, the reference state might be the same object (that is, the digital fingerprint of the same object) acquired and stored at an initial or earlier induction. As mentioned, the objects are compared with each other and the points of interest that are common to a significant number of them are preserved in a class fingerprint.


One benefit of having background features and using them to create a class fingerprint is that the degree of change of a member of the class may be determined without the individual object having been inducted when it was new. How well or poorly the object matches the class fingerprint (compared to how well other items known to be worn out or in good condition matched, and/or compared with its own previous matches) measures its degree of wear.


Referring again to FIG. 5, next, a member of the class that was not part of the original set is selected, block 506. This selected member should be in a known condition, preferably in “new” condition, although any condition may be of use. A (first) digital fingerprint of the selected member is acquired, block 508. That digital fingerprint is compared to the class fingerprint, block 510; a measure of match is determined based on the comparison, block 512, and recorded, block 514. As the selected member experiences wear and tear, it is repeatedly measured against the class fingerprint: a second digital fingerprint is created at a second time, block 516; the second digital fingerprint is compared to the class digital fingerprint, block 518; a second measure of match is determined, block 520, and recorded, block 522; and the first and second measures of match are compared to determine a deterioration of match metric, block 524.


The deterioration of the selected member's match to the class fingerprint is used as a measure or metric of its degree of change, see block 526. Determining that the object is worn out may either be done by training—that is, by seeing how much degradation other objects in the class have undergone before a human decides they are worn out—or by a predetermined loss of matching points between the object and the class digital fingerprint, or by some other method. When a sufficiently high level of degradation occurs in a tested object, the object is declared to be worn out. These processes may be realized in software, for example, in an analysis component 144 in the object change server 110 in FIG. 1.
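
The measurement loop of FIG. 5 can be sketched as follows, where `match_score_fn` stands in for any measure of match (for example, the thresholded-distance matcher sketched earlier) and the worn-out fraction is an arbitrary illustrative value, not a value specified by this disclosure.

```python
# Sketch of blocks 508-526: two measures of match against the class
# fingerprint and the resulting deterioration-of-match metric.
def deterioration_of_match(match_score_fn, class_fp, fp_time1, fp_time2,
                           worn_out_fraction=0.5):
    m1 = match_score_fn(fp_time1, class_fp)   # first measure of match (block 512)
    m2 = match_score_fn(fp_time2, class_fp)   # second measure of match (block 520)
    deterioration = m1 - m2                   # deterioration of match metric (block 524)
    # declare the object worn out once the match falls to a preset fraction
    # of the earlier measure (block 526); the fraction here is arbitrary
    worn_out = m1 > 0 and (m2 / m1) <= worn_out_fraction
    return deterioration, worn_out
```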


Process for a Class of Objects without a Substantial Number of Common Points of Interest


In an embodiment an object is inducted when new (or other known state) and measurement of its change is based on how its own digital fingerprint changes from the originally-inducted one. That digital fingerprint may comprise only those points of interest that occur in this object (and not in similar objects) or it may comprise all points of interest on the object, or, comprise different subsets of such points.


It should be kept in mind that this approach may be used for any object, whether or not it belongs to a class that has common points. It is possible to use common points of interest (as in the former embodiment) as well as individual object points of interest in a given application.


When there are few or no “common features” or “background features,” creating a class digital fingerprint is impossible, so the approach of the previous section cannot be used. When everything is foreground (or treated as though it were foreground), the object must be inducted when it is in a known condition, preferably when it is new, although any condition may be acceptable. Change, therefore, is measured as the change from the originally-inducted digital fingerprint. A turbine blade is one example, but this approach works for almost any object, including those with substantial background points or features.


In this case an object is inducted when in a known condition, preferably in “new” condition for measuring degradation, or, as another example, “prior to anodizing” if anodizing will be taking place, and so on. Later, when the object has been worn or anodized or otherwise altered, the object is inducted again to measure the difference in match points from when it was initially inducted.



FIG. 4 is a simplified flow diagram illustrating a method to detect and measure degradation in physical objects using digital fingerprints, without a class digital fingerprint. At block 402, the process calls for inducting a physical object in a known initial condition into a digital fingerprinting system. Inducting the physical object includes generating and storing a first digital fingerprint of the object at a first time, block 406. This may be called a reference digital fingerprint. Next comes re-inducting the object at a second or next time after the first time, to form a second or next digital fingerprint of the object, block 408. The second or next digital fingerprint is stored in the datastore, block 410. In this description and the claims, phrases like “a second/next digital fingerprint” and “a first/next comparison” etc. are used to indicate a repeating process, where the “next digital fingerprint” for example, would correspond to a second, third, fourth, etc. digital fingerprint as a repeating process or “loop” is executed. Such a loop is indicated in FIG. 4 from the decision 418 back to block 408.


At block 412, the process continues by comparing the second/next digital fingerprint of the object to a first/next preceding digital fingerprint to form a first/next comparison. Block 414 calls for determining a first/next measure of change of the object between the first/next time and the second/next subsequent time based on the first/next comparison. Block 416 calls for recording the first/next measure of change to accumulate measures of change of the object over time, namely, from the first time to the last (most recent) next time.


A decision 418 determines whether the manufacturing process has completed. If so, the process may terminate, block 422. If not, the illustrated method may provide feedback to the manufacturing process, block 420. That is, it may provide information based on changes to the object during the manufacturing process. For example, if a plating process is specified to continue until a certain measure of change is accomplished, the feedback may indicate that the measure of change has occurred and thus that the plating process is complete. Assuming the process is not completed, the method loops back to block 408 to again re-induct the object.
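
The loop of FIG. 4 can be summarized in the following sketch; all helper functions here (acquisition, storage, comparison, and the completion test) are hypothetical placeholders supplied by the caller, not APIs defined by this disclosure.

```python
# Sketch of the FIG. 4 loop: induct, re-induct, compare, accumulate change.
def monitor_object(acquire_fingerprint, store, compare, process_complete):
    reference = acquire_fingerprint()        # blocks 402-406: reference fingerprint
    store(reference)
    previous = reference
    measures_of_change = []
    while not process_complete(measures_of_change):   # decision 418
        current = acquire_fingerprint()      # block 408: re-induct the object
        store(current)                       # block 410
        change = compare(previous, current)  # blocks 412-414: measure of change
        measures_of_change.append(change)    # block 416: accumulate over time
        previous = current                   # feedback to the process (block 420)
                                             # could be issued here as well
    return measures_of_change
```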


Training


Determining how much measured change is sufficient to, say, remove a bill or turbine blade from service or show that the anodizing process has been successful can be done in various ways, any of which are in view in this disclosure. Two approaches are worth highlighting. First, a degree of change of the test digital fingerprint from the reference one may be set. To use the wear-and-tear example of a turbine blade, when, for example, half the points originally seen are no longer there to match the reference, that level of change may be defined as a prompt to replace the blade. This makes sense because half the points missing is indicative of half the surface having been substantially altered. In category one (the category with a large number of background points) it is indicative that half the points that make the object a dollar bill (or whatever) are missing so it may be time to recycle the bill. Note that the use of “half” here is purely arbitrary. The number of points may be set at any level and by any manner, including manually, as a result of experience, as a result of a training process, or by any other method.


The training process approach involves training the system to determine the action to take based on a given degree of change. Again, the wear and tear surface erosion example will be used, but the example is not meant to be limiting. One way to do the training is to induct a set of objects and follow them through their life cycle, measuring them as time passes until they are ready to be replaced. The degradation score at their points of replacement (or the aggregate or average of such scores) may then be used to define a norm or a standard. A more sophisticated training system may be envisioned where a neural net or other machine learning system is trained with objects at various parts of their life cycles and, after the training cycle is complete, the match of the object to the class digital fingerprint (first category) or to its initial induction (second category) is put into the learning system and the system indicates, for example, how much life is left in the object or that it is time to replace the object. A similar training system may be envisioned for use in additive processes as well.
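
For the simpler, non-neural form of training described above, the following sketch aggregates the degradation scores observed when objects were actually retired into a norm and compares a tested object against it; the use of a mean is an illustrative choice, not the disclosure's prescribed aggregation.

```python
# Sketch: learn a replacement norm from observed end-of-life degradation scores.
import statistics

def learn_replacement_norm(scores_at_replacement):
    """scores_at_replacement: degradation scores recorded when objects of this
    class were judged worn out by a human."""
    return statistics.mean(scores_at_replacement)

def is_worn_out(current_degradation_score, norm):
    return current_degradation_score >= norm
```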


Non-Limiting Sample Use Cases

In addition to the currency bill and turbine blade examples explored above, many other embodiments are possible. The teachings of this disclosure apply wherever measurement of change in the surface features of a physical object is useful.


Plating. In addition to anodizing, mentioned above, other methods of obscuring the surface are in view. The obscuring may be unintentional or intentional. Plating is one example. Depending on how thick the plating is supposed to be, the object may be inducted once or many times in the plating process.


In an embodiment, an object's surface may need to be covered with new material that is too thick to see through. There are two uses for digital fingerprinting here. First, the teachings of this disclosure can be used to determine the progress and effectiveness of such additions. In this case, each time the previous surface “disappears” to some predetermined amount, the object is re-inducted, and a new reference preserved. When that surface in turn partially “disappears”, further induction is performed and so on until the addition process is complete. The second use is to re-induct the part when the new surface is finished so that, going forward, the part's identity can be established. As such, in this embodiment the teachings of this disclosure are used to both ensure the additive process is working correctly and, when that process is finished, to ensure that the system can continue to identify the part as it moves through further manufacturing stages.


Item aging. Aging here is a stand-in term for conditions where the object changes not because it is added to or abraded but simply as a result of the passage of time, its interaction with the environment, its regrowth/regeneration (such as in the case of skin or living material), and so on. In this case, surface points may be added or subtracted by the aging process and the degree of change measured not just in terms of points of interest lost but also points of interest that first appear between inductions.


Coin (and other object) grading. Many objects are graded to determine how closely they conform to a standard object. Coins are examples of this. In general, the more a coin is worn (in the case of circulated coins) or scarred by, say, other coins in a bag (uncirculated coins), the less well it will conform.


Coins are graded somewhat differently depending on their type. Proof coins are (generally) double stamped and stamped with mirrored dies. They are not meant for general circulation. Proof coins are graded from PR-60 to PR-70 (impaired or rubbed proof coins may have a lower grade). Uncirculated regular coins are coins that have never been in general circulation. Uncirculated coin grades run from MS-60 to MS-70, which correspond in quality and amount of degradation to the PR categories for proof coins.


Coins that have been in general circulation are graded in a range from 1 to 59: 50 and up are called “about uncirculated” (AU); 40-49 are “extremely fine” (XF); 20-39 are “very fine” (VF). Below that are “fine” (F), “very good” (VG), “good” (G), “about good” (AG), “fair” (FR), “poor” (P), and “ungradable”. Examples are given below.


Coins fit into the first category of objects with similar background features, described above, as coins have a large quantity of shared “background” points. It is clear from the images that degradation from initial conditions depends on both addition (patina, scratch marks) and subtraction (wear marks). The taught system may be used to carry out coin grading by having coins graded by an expert and extracting the digital fingerprints of those coins. A learning system is then programmed based on the grade and the measured change from an ideal example of that coin type. When a coin of that type is to be graded, its digital fingerprint is extracted, the additions and subtractions from the standard are measured, the results are fed through the learning system, and the grade is produced.
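
One possible, purely illustrative realization of such a learning system is sketched below: a regressor maps measured additive and subtractive change, relative to an ideal example of the coin type, to an expert-assigned grade. The regressor choice and the two-feature representation are assumptions, not the method of this disclosure.

```python
# Illustrative sketch: learn a mapping from measured change to coin grade.
from sklearn.ensemble import RandomForestRegressor

def train_grader(additive_change, subtractive_change, expert_grades):
    """Each argument is a sequence with one entry per expert-graded coin."""
    X = list(zip(additive_change, subtractive_change))
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, expert_grades)
    return model

def grade_coin(model, additive, subtractive):
    return float(model.predict([[additive, subtractive]])[0])
```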


Digital Fingerprinting

“Digital fingerprinting” refers to the creation and use of digital records (digital fingerprints) derived from properties of a physical object, which digital records are typically stored in a database. Digital fingerprints may be used to reliably and unambiguously identify or authenticate corresponding physical objects, track them through supply chains, record their provenance and changes over time, and for many other uses and applications.


Digital fingerprints store information, preferably in the form of numbers or “feature vectors,” that describes features that appear at particular locations, called points of interest, of a two-dimensional (2-D) or three-dimensional (3-D) object. In the case of a 2-D object, the points of interest are preferably on a surface of the corresponding object; in the 3-D case, the points of interest may be on the surface or in the interior of the object. In some applications, an object “feature template” may be used to define locations or regions of interest for a class of objects. The digital fingerprints may be derived or generated from digital data of the object which may be, for example, image data.


While the data from which digital fingerprints are derived is often images, a digital fingerprint may contain digital representations of any data derived from or associated with the object. For example, digital fingerprint data may be derived from an audio file. That audio file in turn may be associated or linked in a database to an object. Thus, in general, a digital fingerprint may be derived from a first object directly, or it may be derived from a different object (or file) linked to the first object, or a combination of the two (or more) sources. In the audio example, the audio file may be a recording of a person speaking a particular phrase. The digital fingerprint of the audio recording may be stored as part of a digital fingerprint of the person speaking. The digital fingerprint (of the person) may be used as part of a system and method to later identify or authenticate that person, based on their speaking the same phrase, in combination with other sources.


Returning to the 2-D and 3-D object examples mentioned above, feature extraction or feature detection may be used to characterize points of interest. In an embodiment, this may be done in various ways. Two examples include Scale-Invariant Feature Transform (or SIFT) and Speeded Up Robust features (or SURF). Both are described in the literature. For example: “Feature detection and matching are used in image registration, object tracking, object retrieval etc. There are number of approaches used to detect and matching of features as SIFT (Scale Invariant Feature Transform), SURF (Speeded up Robust Feature), FAST, ORB etc. SIFT and SURF are most useful approaches to detect and matching of features because of it is invariant to scale, rotate, translation, illumination, and blur.” MISTRY, Darshana et al., Comparison of Feature Detection and Matching Approaches: SIFT and SURF, GRD Journals—Global Research and Development Journal for Engineering|Volume 2|Issue 4|March 2017.


In an embodiment, features may be used to represent information derived from a digital image in a machine-readable and useful way. Features may be points, lines, edges, blobs of an image, etc. There are areas, such as image registration, object tracking, and object retrieval, that require a system or processor to detect and match correct features. Therefore, it may be desirable to find features in ways that are invariant to rotation, scale, translation, illumination, noise, and blur. The search for interest points from one object image to corresponding images can be very challenging. The search may preferably be done such that the same physical interest points can be found in different views. Once located, points of interest and their respective characteristics may be aggregated to form the digital fingerprint (generally including 2-D or 3-D location parameters).


In an embodiment, features may be matched, for example, based on finding a minimum threshold distance. Distances can be found using Euclidean distance, Manhattan distance, etc. If the distance between two points is less than a prescribed minimum threshold distance, those key points may be considered a matching pair. Matching a digital fingerprint may comprise assessing the number of matching pairs, their locations or distances, and other characteristics. Many points may be assessed to calculate a likelihood of a match, since, generally, a perfect match will not be found. In some applications a “feature template” may be used to define locations or regions of interest for a class of objects.
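
As a concrete, non-limiting example of the kind of feature matching described above, the following sketch uses OpenCV's SIFT detector and a brute-force matcher with Euclidean descriptor distance; the ratio test is a common filtering convention, not a requirement of this disclosure.

```python
# Sketch: find matching pairs between two grayscale images with SIFT + BFMatcher.
import cv2

def sift_matching_pairs(img_a_gray, img_b_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a_gray, None)
    kp_b, desc_b = sift.detectAndCompute(img_b_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance on descriptors
    candidates = matcher.knnMatch(desc_a, desc_b, k=2)
    # keep a pair only when the best match is clearly closer than the runner-up
    return [m for m, n in candidates if m.distance < ratio * n.distance]
```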


Scanning


In this application, the term “scan” is used in the broadest sense, referring to any and all means for capturing an image or set of images, which may be in digital form or transformed into digital form. Images may, for example, be two dimensional, three dimensional, or in the form of a video. Thus a “scan” may refer to an image (or digital data that defines an image) captured by a scanner, a camera, a specially adapted sensor or sensor array (such as a CCD array), a microscope, a smartphone camera, a video camera, an x-ray machine, a sonar, an ultrasound machine, a microphone (or other instruments for converting sound waves into electrical energy variations), etc. Broadly, any device that can sense and capture either electromagnetic radiation or a mechanical wave that has traveled through an object or reflected off an object, or any other means to capture surface or internal structure of an object, is a candidate to create a “scan” of an object. Various means to extract “fingerprints” or features from an object may be used; for example, through sound, physical structure, chemical composition, or many others. The remainder of this application will use terms like “image” but when doing so, the broader uses of this technology should be implied. In other words, alternative means to extract “fingerprints” or features from an object should be considered equivalents within the scope of this disclosure. Similarly, terms such as “scanner” and “scanning equipment” herein may be used in a broad sense to refer to any equipment capable of carrying out “scans” as defined above, or to equipment that carries out “scans” as defined above as part of their function.


More information about digital fingerprinting can be found in various patents and publications assigned to Alitheon, Inc. including, for example, the following: DIGITAL FINGERPRINTING, U.S. Pat. No. 8,6109,762; OBJECT IDENTIFICATION AND INVENTORY MANAGEMENT, U.S. Pat. No. 9,152,862; DIGITAL FINGERPRINTING OBJECT AUTHENTICATION AND ANTI-COUNTERFEITING SYSTEM, U.S. Pat. No. 9,443,298; PERSONAL HISTORY IN TRACK AND TRACE SYSTEM, U.S. Pat. No. 10,037,537; PRESERVING AUTHENTICATION UNDER ITEM CHANGE, U.S. Pat. App. Pub. No. 2017-0243230 A1. These references are incorporated herein by this reference.


The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated.


The system and method disclosed herein may be implemented via one or more components, systems, servers, appliances, other subcomponents, or distributed between such elements. When implemented as a system, such systems may include and/or involve, inter alia, components such as software modules, general-purpose CPU, RAM, etc. found in general-purpose computers. In implementations where the innovations reside on a server, such a server may include or involve components such as CPU, RAM, etc., such as those found in general-purpose computers.


Additionally, the system and method herein may be achieved via implementations with disparate or entirely different software, hardware and/or firmware components, beyond that set forth above. With regard to such other components (e.g., software, processing components, etc.) and/or computer-readable media associated with or embodying the present disclosure, for example, aspects of the innovations herein may be implemented consistent with numerous general purpose or special purpose computing systems or configurations. Various exemplary computing systems, environments, and/or configurations that may be suitable for use with the innovations herein may include, but are not limited to: software or other components within or embodied on personal computers, servers or server computing devices such as routing/connectivity components, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, consumer electronic devices, network PCs, other existing computer platforms, distributed computing environments that include one or more of the above systems or devices, etc.


In some instances, aspects of the system and method may be achieved via or performed by logic and/or logic instructions including program modules, executed in association with such components or circuitry, for example. In general, program modules may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular instructions herein. The disclosures may also be practiced in the context of distributed software, computer, or circuit settings where circuitry is connected via communication buses, circuitry or links. In distributed settings, control/instructions may occur from both local and remote computer storage media including memory storage devices.


The software, circuitry and components herein may also include and/or utilize one or more types of computer readable media. Computer readable media can be any available media that is resident on, associable with, or can be accessed by such circuits and/or computing components. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and can be accessed by a computing component. Communication media may comprise computer readable instructions, data structures, program modules and/or other components. Further, communication media may include wired media such as a wired network or direct-wired connection; however, no media of any such type herein includes transitory media. Combinations of any of the above are also included within the scope of computer readable media.


In the present description, the terms component, module, device, etc. may refer to any type of logical or functional software elements, circuits, blocks and/or processes that may be implemented in a variety of ways. For example, the functions of various circuits and/or blocks can be combined with one another into any other number of modules. Each module may even be implemented as a software program stored on a tangible memory (e.g., random access memory, read only memory, CD-ROM memory, hard disk drive, etc.) to be read by a central processing unit to implement the functions of the innovations herein. Or, the modules can comprise programming instructions transmitted to a general-purpose computer or to processing/graphics hardware via a transmission carrier wave. Also, the modules can be implemented as hardware logic circuitry implementing the functions encompassed by the innovations herein. Finally, the modules can be implemented using special purpose instructions (SIMD instructions), field programmable logic arrays or any mix thereof which provides the desired level of performance and cost.


As disclosed herein, features consistent with the disclosure may be implemented via computer-hardware, software and/or firmware. For example, the systems and methods disclosed herein may be embodied in various forms including, for example, a data processor, such as a computer that also includes a database, digital electronic circuitry, firmware, software, or in combinations of them. Further, while some of the disclosed implementations describe specific hardware components, systems and methods consistent with the innovations herein may be implemented with any combination of hardware, software and/or firmware. Moreover, the above-noted features and other aspects and principles of the innovations herein may be implemented in various environments. Such environments and related applications may be specially constructed for performing the various routines, processes and/or operations according to the present disclosure or they may include a general-purpose computer or computing platform selectively activated or reconfigured by code to provide the necessary functionality. The processes disclosed herein are not inherently related to any particular computer, network, architecture, environment, or other apparatus, and may be implemented by a suitable combination of hardware, software, and/or firmware. For example, various general-purpose machines may be used with programs written in accordance with teachings of the present disclosure, or it may be more convenient to construct a specialized apparatus or system to perform the required methods and techniques.


Aspects of the method and system described herein, such as the logic, may also be implemented as functionality programmed into any of a variety of circuitry, including programmable logic devices (“PLDs”), such as field programmable gate arrays (“FPGAs”), programmable array logic (“PAL”) devices, electrically programmable logic and memory devices and standard cell-based devices, as well as application specific integrated circuits. Some other possibilities for implementing aspects include memory devices, microcontrollers with memory (such as EEPROM), embedded microprocessors, firmware, software, etc. Furthermore, aspects may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. The underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (“MOSFET”) technologies like complementary metal-oxide semiconductor (“CMOS”), bipolar technologies like emitter-coupled logic (“ECL”), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, and so on.


It should also be noted that the various logic and/or functions disclosed herein may be enabled using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, non-volatile storage media in various forms (e.g., optical, magnetic or semiconductor storage media), though, again, these do not include transitory media. Unless the context clearly requires otherwise, throughout the description, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.


Although certain presently preferred implementations of the present disclosure have been specifically described herein, it will be apparent to those skilled in the art to which the present disclosure pertains that variations and modifications of the various implementations shown and described herein may be made without departing from the spirit and scope of the present disclosure. Accordingly, it is intended that the present disclosure be limited only to the extent required by the applicable rules of law.


While the foregoing has been with reference to a particular embodiment of the disclosure, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.

Claims
  • 1. A processor-based system to assess objects that may change over time, the processor-based system comprising: at least one processor; and at least one nontransitory processor-readable medium communicatively coupled to the at least one processor, the at least one nontransitory processor-readable medium which stores a dataset and at least one set of processor-executable instructions, the dataset which includes at least one reference digital fingerprint that represents the object under assessment in a known condition at a time prior to a time of assessment or that represents a number of other objects in a known condition at a time prior to the time of assessment, the number of other objects each being a same type of object as the object under assessment, the processor-executable instructions which, when executed by the at least one processor, cause the at least one processor to: for a digital fingerprint of an object under assessment, wherein the digital fingerprint represents the object under assessment at the time of assessment of the object, compare the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the object under assessment in the known condition at the time prior to the time of assessment or that represents the number of other objects in the known condition at the time prior to the time of assessment; determine at least one metric that represents a measure of change in the object under assessment based on the comparison; determine a condition of the object based at least in part on the determined measure of change in the object under assessment; update the dataset to reflect the determined at least one metric that represents the measure of change in the object under assessment; and repeatedly update the dataset to reflect the determined at least one metric that represents the measure of change in the object under assessment for a plurality of successive times following the time of assessment, to accumulate measures of change in the object under assessment over time.
  • 2. The processor-based system of claim 1 wherein, when executed by the at least one processor, the processor-executable instructions cause the at least one processor to: determine an amount of wear that the object has been subjected to over time, the amount of wear representative of the condition of the object.
  • 3. The processor-based system of claim 1 wherein, when executed by the at least one processor, the processor-executable instructions cause the at least one processor to: determine an amount of subtractive manufacturing that the object has been subjected to over time, the amount of subtractive manufacturing representative of the condition of the object.
  • 4. The processor-based system of claim 1 wherein, when executed by the at least one processor, the processor-executable instructions cause the at least one processor to: determine an amount of additive manufacturing that the object has been subjected to over time, the amount of additive manufacturing representative of the condition of the object.
  • 5. The processor-based system of claim 1 wherein to compare the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the object under assessment in the known condition at the time prior to the time of assessment or that represents the number of other objects in the known condition at the time prior to the time of assessment the at least one processor: compares the digital fingerprint of the object under assessment with two or more reference digital fingerprints that respectively represent the object under assessment in respective ones of two or more known conditions at respective times which occur prior to the time of assessment.
  • 6. The processor-based system of claim 1 wherein the dataset includes at least one class reference digital fingerprint that represents a class of object, the reference digital fingerprint which represents a set of two or more of the other objects in the known condition at the time that occurs prior to the time of assessment.
  • 7. The processor-based system of claim 1 wherein the dataset includes at least one reference digital fingerprint that comprises a respective feature vector for each of a plurality of points of interest captured from the object under assessment in a respective known condition at each of one or more times prior to the time of assessment.
  • 8. The processor-based system of claim 1 wherein to compare the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the number of other objects in the known condition at the time prior to the time of assessment the at least one processor: for each point of interest, determines a corresponding feature vector distance between the feature vector for the respective point of interest of the at least one reference digital fingerprint and a feature vector for the respective point of interest of the digital fingerprint of the object under assessment.
  • 9. The processor-based system of claim 1 wherein to determine at least one metric that represents a measure of change in the object under assessment based on the comparison the at least one processor: determines a total number of points of interest of the digital fingerprint of the object under assessment that match respective points of interest of the at least one reference digital fingerprint.
  • 10. A method of operation of a processor-based system to assess objects that may change over time, the method comprising: receiving a digital fingerprint of an object under assessment, wherein the digital fingerprint represents the object under assessment at a first time of assessment of the object; accessing a stored dataset that includes at least one reference digital fingerprint that represents the object under assessment in a known condition at a time prior to the first time of assessment or that represents a number of other objects in a known condition at a time prior to the first time of assessment, the number of other objects each being a same type of object as the object under assessment; comparing, via at least one processor, the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the object under assessment in the known condition at the time prior to the first time of assessment or that represents the number of other objects in the known condition at the time prior to the first time of assessment; determining, via at least one processor, at least one metric that represents a measure of change in the object under assessment based on the comparison; determining a condition of the object based at least in part on the determined measure of change in the object under assessment; updating the dataset to reflect the determined at least one metric that represents the measure of change in the object under assessment; and repeatedly updating the dataset to reflect the determined at least one metric that represents the measure of change in the object under assessment for a plurality of successive times following the first time of assessment, to accumulate measures of change in the object under assessment over time.
  • 11. The method of claim 10 wherein determining a condition of the object based at least in part on the determined measure of change in the object under assessment includes determining an amount of wear that the object has been subjected to over time.
  • 12. The method of claim 10 wherein determining a condition of the object based at least in part on the determined measure of change in the object under assessment includes determining an amount of subtractive manufacturing that the object has been subjected to over time.
  • 13. The method of claim 10 wherein determining a condition of the object based at least in part on the determined measure of change in the object under assessment includes determining an amount of additive manufacturing that the object has been subjected to over time.
  • 14. The method of claim 10 wherein comparing the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the object under assessment in the known condition at the time prior to the first time of assessment or that represents the number of other objects in the known condition at the time prior to the first time of assessment includes: comparing the digital fingerprint of the object under assessment with two or more reference digital fingerprints that respectively represent the object under assessment in respective ones of two or more known conditions at respective times which occur prior to the first time of assessment.
  • 15. The method of claim 10 wherein comparing the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the object under assessment in the known condition at the time prior to the first time of assessment or that represents the number of other objects in the known condition at the time prior to the first time of assessment includes: comparing the digital fingerprint of the object under assessment with the reference digital fingerprint that represents a class of object, the reference digital fingerprint which represents a set of two or more of the other objects in the known condition at the time that occurs prior to the first time of assessment.
  • 16. The method of claim 10 wherein accessing a stored dataset includes accessing a stored dataset that includes at least one class reference digital fingerprint that represents a class of object, the reference digital fingerprint which represents a set of two or more of the other objects in the known condition at the time that occurs prior to the first time of assessment.
  • 17. The method of claim 10 wherein accessing a stored dataset includes accessing a stored dataset that includes at least one reference digital fingerprint that comprises a respective feature vector for each of a plurality of points of interest captured from two or more other objects in a respective known condition at a time prior to the first time of assessment.
  • 18. The method of claim 10 wherein comparing the digital fingerprint of the object under assessment with the at least one reference digital fingerprint that represents the number of other objects in the known condition at the time prior to the first time of assessment includes: for each point of interest, determining a corresponding feature vector distance between the feature vector for the respective point of interest of the at least one reference digital fingerprint and a feature vector for the respective point of interest of the digital fingerprint of the object under assessment.
  • 19. The method of claim 10 wherein determining at least one metric that represents a measure of change in the object under assessment based on the comparison includes determining a number of points of interest of the digital fingerprint of the object under assessment that match respective points of interest of the at least one reference digital fingerprint.
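For illustration only, the following Python sketch models the comparison and change-measurement flow recited in the claims above: a digital fingerprint is treated as a mapping from point-of-interest identifiers to feature vectors, a per-point feature-vector distance is computed (in the spirit of claims 8 and 18), matching points of interest are counted (claims 9 and 19), and the resulting metric is accumulated in a dataset over successive times of assessment (claims 1 and 10). This is a minimal sketch under assumed data structures; the function names, the Euclidean distance measure, and the match threshold are hypothetical and are not taken from the patent.

# Illustrative sketch only; hypothetical names, data model, and threshold.
import math
from typing import Dict, List

FeatureVector = List[float]
DigitalFingerprint = Dict[str, FeatureVector]  # point-of-interest id -> feature vector

MATCH_THRESHOLD = 0.25  # assumed distance below which two feature vectors "match"

def feature_distance(a: FeatureVector, b: FeatureVector) -> float:
    """One possible feature-vector distance measure: Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def change_metric(reference: DigitalFingerprint, current: DigitalFingerprint) -> dict:
    """Compare the current fingerprint with a reference fingerprint, point of
    interest by point of interest, and return a simple measure of change."""
    matches = 0
    distances = {}
    for poi_id, ref_vec in reference.items():
        cur_vec = current.get(poi_id)
        if cur_vec is None:
            continue  # point of interest not recovered at the time of assessment
        d = feature_distance(ref_vec, cur_vec)
        distances[poi_id] = d
        if d <= MATCH_THRESHOLD:
            matches += 1
    total = len(reference)
    return {
        "matching_points": matches,            # count of matching points of interest
        "total_reference_points": total,
        "change_fraction": 1.0 - matches / total if total else 0.0,
        "per_point_distance": distances,
    }

def update_dataset(history: List[dict], assessment_time: str, metric: dict) -> None:
    """Accumulate measures of change over successive times of assessment."""
    history.append({"time": assessment_time, **metric})

if __name__ == "__main__":
    reference = {"p1": [0.10, 0.20], "p2": [0.40, 0.40], "p3": [0.90, 0.10]}
    observed = {"p1": [0.11, 0.21], "p2": [0.75, 0.90], "p3": [0.90, 0.12]}
    history: List[dict] = []
    update_dataset(history, "t1", change_metric(reference, observed))
    print(history[-1]["matching_points"], "of", history[-1]["total_reference_points"], "points still match")

In a fuller treatment the reference could instead be a class digital fingerprint aggregated from several exemplar objects (as in claims 6, 15, and 16), and the accumulated metric could be compared against a deterioration threshold to decide when the object's condition has materially changed; those extensions are assumptions for illustration, not details drawn from the claims.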
CROSS-REFERENCE TO RELATED APPLICATION

This application is a Continuation of U.S. application Ser. No. 17/189,470, filed Mar. 2, 2021 (ref 680-US-C), which is a Continuation of U.S. application Ser. No. 16/780,882, filed Feb. 3, 2020 (ref 680-US), which is a non-provisional of and claims priority, pursuant to 35 U.S.C. § 119(e), to U.S. Application No. 62/802,177, filed Feb. 6, 2019 (ref 680-P), all of which are hereby incorporated by reference as though fully set forth.

US Referenced Citations (359)
Number Name Date Kind
4218674 Brosow et al. Aug 1980 A
4423415 Goldman Dec 1983 A
4677435 Causse et al. Jun 1987 A
4700400 Ross Oct 1987 A
4883971 Jensen Nov 1989 A
4921107 Hofer May 1990 A
5031223 Rosenbaum et al. Jul 1991 A
5079714 Manduley et al. Jan 1992 A
5393939 Nasuta et al. Feb 1995 A
5422821 Allen et al. Jun 1995 A
5514863 Williams May 1996 A
5518122 Tilles et al. May 1996 A
5521984 Denenberg et al. May 1996 A
5703783 Allen et al. Dec 1997 A
5719939 Tel Feb 1998 A
5734568 Borgendale et al. Mar 1998 A
5745590 Pollard Apr 1998 A
5883971 Bolle et al. Mar 1999 A
5923848 Goodhand et al. Jul 1999 A
5974150 Kaish et al. Oct 1999 A
6205261 Goldberg Mar 2001 B1
6246794 Kagehiro et al. Jun 2001 B1
6292709 Uhl et al. Sep 2001 B1
6327373 Yura Dec 2001 B1
6343327 Daniels et al. Jan 2002 B2
6360001 Berger et al. Mar 2002 B1
6370259 Hobson et al. Apr 2002 B1
6400805 Brown et al. Jun 2002 B1
6421453 Kanevsky et al. Jul 2002 B1
6424728 Ammar Jul 2002 B1
6434601 Rollins Aug 2002 B1
6470091 Koga et al. Oct 2002 B2
6539098 Baker et al. Mar 2003 B1
6549892 Sansone Apr 2003 B1
6597809 Ross et al. Jul 2003 B1
6643648 Ross et al. Nov 2003 B1
6697500 Woolston et al. Feb 2004 B2
6741724 Bruce et al. May 2004 B1
6768810 Emanuelsson et al. Jul 2004 B2
6778703 Zlotnick Aug 2004 B1
6805926 Cote et al. Oct 2004 B2
6816602 Coffelt et al. Nov 2004 B2
6829369 Poulin et al. Dec 2004 B2
6937748 Schneider et al. Aug 2005 B1
6940391 Ishikura et al. Sep 2005 B1
6961466 Imagawa et al. Nov 2005 B2
6985925 Ogawa Jan 2006 B2
6985926 Ferlauto et al. Jan 2006 B1
7016532 Boncyk et al. Mar 2006 B2
7031519 Elmenhurst Apr 2006 B2
7096152 Ong Aug 2006 B1
7120302 Billester Oct 2006 B1
7121458 Avant et al. Oct 2006 B2
7152047 Nagel Dec 2006 B1
7171049 Snapp Jan 2007 B2
7204415 Payne et al. Apr 2007 B2
7212949 Bachrach May 2007 B2
7333987 Ross et al. Feb 2008 B2
7343623 Ross Mar 2008 B2
7356162 Caillon Apr 2008 B2
7379603 Ross et al. May 2008 B2
7436979 Bruce et al. Oct 2008 B2
7477780 Boncyk et al. Jan 2009 B2
7518080 Amato Apr 2009 B2
7602938 Prokoski Oct 2009 B2
7674995 Desprez et al. Mar 2010 B2
7676433 Ross et al. Mar 2010 B1
7680306 Boutant et al. Mar 2010 B2
7720256 Desprez et al. May 2010 B2
7726457 Maier et al. Jun 2010 B2
7726548 Delavergne Jun 2010 B2
7748029 Ross Jun 2010 B2
7822263 Prokoski Oct 2010 B1
7834289 Orbke et al. Nov 2010 B2
7853792 Cowburn Dec 2010 B2
8022832 Vogt et al. Sep 2011 B2
8032927 Ross Oct 2011 B2
8108309 Tan Jan 2012 B2
8162125 Csulits et al. Apr 2012 B1
8180174 Di et al. May 2012 B2
8180667 Baluja et al. May 2012 B1
8194938 Wechsler et al. Jun 2012 B2
8316418 Ross Nov 2012 B2
8374020 Katti Feb 2013 B2
8374399 Talwerdi Feb 2013 B1
8374920 Hedges et al. Feb 2013 B2
8391583 Mennie et al. Mar 2013 B1
8428772 Miette et al. Apr 2013 B2
8437530 Mennie et al. May 2013 B1
8457354 Kolar et al. Jun 2013 B1
8477992 Paul et al. Jul 2013 B2
8520888 Spitzig et al. Aug 2013 B2
8526743 Campbell et al. Sep 2013 B1
8774455 Elmenhurst et al. Jul 2014 B2
8856881 Mouleswaran et al. Oct 2014 B2
8959029 Jones et al. Feb 2015 B2
9031329 Farid et al. May 2015 B1
9058543 Campbell et al. Jun 2015 B2
9152862 Ross et al. Oct 2015 B2
9170654 Boncyk et al. Oct 2015 B2
9213845 Taraki et al. Dec 2015 B1
9224196 Duerksen et al. Dec 2015 B2
9234843 Sopori et al. Jan 2016 B2
9245133 Durst et al. Jan 2016 B1
9350552 Elmenhurst et al. May 2016 B2
9350714 Freeman et al. May 2016 B2
9361507 Hoyos et al. Jun 2016 B1
9361596 Ross et al. Jun 2016 B2
9424461 Yuan et al. Aug 2016 B1
9443298 Ross et al. Sep 2016 B2
9558463 Ross et al. Jan 2017 B2
9582714 Ross et al. Feb 2017 B2
9646206 Ross et al. May 2017 B2
9665800 Kuffner May 2017 B1
9741724 Seshadri et al. Aug 2017 B2
10037537 Withrow et al. Jul 2018 B2
10043073 Ross et al. Aug 2018 B2
10192140 Ross et al. Jan 2019 B2
10199886 Li et al. Feb 2019 B2
10275585 Fadell et al. Apr 2019 B2
10346852 Ross et al. Jul 2019 B2
10398370 Boshra et al. Sep 2019 B2
10505726 Andon et al. Dec 2019 B1
10540664 Ross et al. Jan 2020 B2
10572749 Bonev et al. Feb 2020 B1
10572883 Ross et al. Feb 2020 B2
10614302 Withrow et al. Apr 2020 B2
10621594 Land et al. Apr 2020 B2
10740767 Withrow Aug 2020 B2
10936838 Wong Mar 2021 B1
20010010334 Park et al. Aug 2001 A1
20010054031 Lee et al. Dec 2001 A1
20020015515 Lichtermann et al. Feb 2002 A1
20020073049 Dutta Jun 2002 A1
20020134836 Cash et al. Sep 2002 A1
20020168090 Bruce et al. Nov 2002 A1
20030015395 Hallowell et al. Jan 2003 A1
20030046103 Amato et al. Mar 2003 A1
20030091724 Mizoguchi May 2003 A1
20030120677 Vernon Jun 2003 A1
20030138128 Rhoads Jul 2003 A1
20030179931 Sun Sep 2003 A1
20030182018 Snapp Sep 2003 A1
20030208298 Edmonds Nov 2003 A1
20030219145 Smith Nov 2003 A1
20040027630 Lizotte Feb 2004 A1
20040101174 Sato et al. May 2004 A1
20040112962 Farrall et al. Jun 2004 A1
20040218791 Jiang et al. Nov 2004 A1
20040218801 Houle et al. Nov 2004 A1
20040250085 Tattan et al. Dec 2004 A1
20050007776 Monk et al. Jan 2005 A1
20050038756 Nagel Feb 2005 A1
20050065719 Khan et al. Mar 2005 A1
20050086256 Owens et al. Apr 2005 A1
20050111618 Sommer et al. May 2005 A1
20050119786 Kadaba Jun 2005 A1
20050125360 Tidwell et al. Jun 2005 A1
20050131576 De et al. Jun 2005 A1
20050137882 Cameron et al. Jun 2005 A1
20050160271 Brundage et al. Jul 2005 A9
20050169529 Owechko et al. Aug 2005 A1
20050188213 Xu Aug 2005 A1
20050204144 Mizutani Sep 2005 A1
20050251285 Boyce et al. Nov 2005 A1
20050257064 Boutant et al. Nov 2005 A1
20050289061 Kulakowski et al. Dec 2005 A1
20060010503 Inoue et al. Jan 2006 A1
20060083414 Neumann et al. Apr 2006 A1
20060109520 Gossaye et al. May 2006 A1
20060131518 Ross et al. Jun 2006 A1
20060165261 Pira Jul 2006 A1
20060177104 Prokoski Aug 2006 A1
20060253406 Caillon Nov 2006 A1
20070036470 Piersol et al. Feb 2007 A1
20070056041 Goodman Mar 2007 A1
20070071291 Yumoto et al. Mar 2007 A1
20070085710 Bousquet et al. Apr 2007 A1
20070094155 Dearing Apr 2007 A1
20070211651 Ahmed et al. Sep 2007 A1
20070211964 Agam et al. Sep 2007 A1
20070223791 Shinzaki Sep 2007 A1
20070230656 Lowes et al. Oct 2007 A1
20070263267 Ditt Nov 2007 A1
20070269043 Launay et al. Nov 2007 A1
20070282900 Owens et al. Dec 2007 A1
20080005578 Shafir Jan 2008 A1
20080008377 Andel et al. Jan 2008 A1
20080011841 Self et al. Jan 2008 A1
20080013804 Moon et al. Jan 2008 A1
20080016355 Beun et al. Jan 2008 A1
20080128496 Bertranou et al. Jun 2008 A1
20080130947 Ross et al. Jun 2008 A1
20080219503 Di et al. Sep 2008 A1
20080250483 Lee Oct 2008 A1
20080255758 Graham et al. Oct 2008 A1
20080272585 Conard et al. Nov 2008 A1
20080290005 Bennett et al. Nov 2008 A1
20080294474 Furka Nov 2008 A1
20090028379 Belanger et al. Jan 2009 A1
20090057207 Orbke et al. Mar 2009 A1
20090083850 Fadell et al. Mar 2009 A1
20090106042 Maytal et al. Apr 2009 A1
20090134222 Ikeda May 2009 A1
20090154778 Lei et al. Jun 2009 A1
20090157733 Kim et al. Jun 2009 A1
20090223099 Versteeg Sep 2009 A1
20090232361 Miller Sep 2009 A1
20090245652 Bastos Oct 2009 A1
20090271029 Doutre Oct 2009 A1
20090287498 Choi Nov 2009 A2
20090307005 Omartin et al. Dec 2009 A1
20100027834 Spitzig et al. Feb 2010 A1
20100054551 Decoux Mar 2010 A1
20100070527 Chen Mar 2010 A1
20100104200 Baras et al. Apr 2010 A1
20100157064 Cheng et al. Jun 2010 A1
20100163612 Caillon Jul 2010 A1
20100166303 Rahimi Jul 2010 A1
20100174406 Miette et al. Jul 2010 A1
20100286815 Zimmermann Nov 2010 A1
20110026831 Perronnin et al. Feb 2011 A1
20110049235 Gerigk et al. Mar 2011 A1
20110064279 Uno Mar 2011 A1
20110081043 Sabol et al. Apr 2011 A1
20110091068 Stuck et al. Apr 2011 A1
20110161117 Busque et al. Jun 2011 A1
20110188709 Gupta et al. Aug 2011 A1
20110194780 Li et al. Aug 2011 A1
20110235920 Iwamoto et al. Sep 2011 A1
20110267192 Goldman et al. Nov 2011 A1
20110291839 Cole Dec 2011 A1
20120011119 Baheti et al. Jan 2012 A1
20120042171 White et al. Feb 2012 A1
20120089639 Wang Apr 2012 A1
20120130868 Loeken May 2012 A1
20120177281 Frew Jul 2012 A1
20120185393 Atsmon et al. Jul 2012 A1
20120199651 Glazer Aug 2012 A1
20120242481 Gernandt et al. Sep 2012 A1
20120243797 Di Venuto Dayer et al. Sep 2012 A1
20120250945 Peng et al. Oct 2012 A1
20130110719 Carter et al. May 2013 A1
20130162394 Etchegoyen Jun 2013 A1
20130202182 Rowe Aug 2013 A1
20130212027 Sharma et al. Aug 2013 A1
20130214164 Zhang et al. Aug 2013 A1
20130236066 Shubinsky et al. Sep 2013 A1
20130277425 Sharma et al. Oct 2013 A1
20130284803 Wood et al. Oct 2013 A1
20140032322 Schwieger et al. Jan 2014 A1
20140140570 Ross et al. May 2014 A1
20140140571 Elmenhurst et al. May 2014 A1
20140184843 Campbell et al. Jul 2014 A1
20140201094 Herrington et al. Jul 2014 A1
20140253711 Balch et al. Sep 2014 A1
20140270341 Elmenhurst et al. Sep 2014 A1
20140314283 Harding Oct 2014 A1
20140355890 Highley Dec 2014 A1
20140380446 Niu et al. Dec 2014 A1
20150043023 Ito Feb 2015 A1
20150058142 Lenahan et al. Feb 2015 A1
20150067346 Ross et al. Mar 2015 A1
20150078629 Gottemukkula et al. Mar 2015 A1
20150086068 Mulhearn et al. Mar 2015 A1
20150110364 Niinuma et al. Apr 2015 A1
20150117701 Ross et al. Apr 2015 A1
20150127430 Hammer May 2015 A1
20150248587 Oami et al. Sep 2015 A1
20150294189 Benhimane et al. Oct 2015 A1
20150309502 Breitgand et al. Oct 2015 A1
20150347815 Dante et al. Dec 2015 A1
20150347833 Robinson et al. Dec 2015 A1
20150371087 Ross et al. Dec 2015 A1
20160034913 Zavarehi et al. Feb 2016 A1
20160034914 Gonen et al. Feb 2016 A1
20160055651 Oami Feb 2016 A1
20160057138 Hoyos et al. Feb 2016 A1
20160072626 Kouladjie Mar 2016 A1
20160117631 McCloskey et al. Apr 2016 A1
20160162734 Ross et al. Jun 2016 A1
20160180485 Avila et al. Jun 2016 A1
20160180546 Kim et al. Jun 2016 A1
20160189510 Hutz Jun 2016 A1
20160203387 Lee et al. Jul 2016 A1
20160283808 Oganezov et al. Sep 2016 A1
20160300094 Lu et al. Oct 2016 A1
20160335520 Ross et al. Nov 2016 A1
20170004444 Krasko et al. Jan 2017 A1
20170024603 Misslin Jan 2017 A1
20170032285 Sharma et al. Feb 2017 A1
20170076132 Sezan et al. Mar 2017 A1
20170132458 Short et al. May 2017 A1
20170153069 Huang et al. Jun 2017 A1
20170220901 Klimovski et al. Aug 2017 A1
20170243230 Ross et al. Aug 2017 A1
20170243231 Withrow et al. Aug 2017 A1
20170243232 Ross et al. Aug 2017 A1
20170243233 Land et al. Aug 2017 A1
20170249491 Macintosh et al. Aug 2017 A1
20170251143 Peruch et al. Aug 2017 A1
20170253069 Kerkar et al. Sep 2017 A1
20170295301 Liu et al. Oct 2017 A1
20170300905 Withrow et al. Oct 2017 A1
20170344823 Withrow et al. Nov 2017 A1
20170344824 Martin Nov 2017 A1
20170372327 Withrow Dec 2017 A1
20180000359 Watanabe Jan 2018 A1
20180012008 Withrow et al. Jan 2018 A1
20180018627 Ross et al. Jan 2018 A1
20180018838 Fankhauser et al. Jan 2018 A1
20180024074 Ranieri et al. Jan 2018 A1
20180024178 House et al. Jan 2018 A1
20180039818 Kim et al. Feb 2018 A1
20180047128 Ross et al. Feb 2018 A1
20180053312 Ross et al. Feb 2018 A1
20180121643 Talwerdi et al. May 2018 A1
20180129860 Krishnapura et al. May 2018 A1
20180129861 Kim et al. May 2018 A1
20180144211 Ross et al. May 2018 A1
20180174586 Zamora Esquivel et al. Jun 2018 A1
20180189582 Lo et al. Jul 2018 A1
20180218505 Kim et al. Aug 2018 A1
20180293370 Kim et al. Oct 2018 A1
20180315058 Withrow et al. Nov 2018 A1
20180341766 Anagnostopoulos Nov 2018 A1
20180341835 Siminoff Nov 2018 A1
20180349694 Ross et al. Dec 2018 A1
20190026581 Leizerson Jan 2019 A1
20190034518 Liu et al. Jan 2019 A1
20190034694 Ross Jan 2019 A1
20190073533 Chen et al. Mar 2019 A1
20190102873 Wang et al. Apr 2019 A1
20190102973 Oyama et al. Apr 2019 A1
20190130082 Alameh et al. May 2019 A1
20190228174 Withrow et al. Jul 2019 A1
20190266373 Hirokawa Aug 2019 A1
20190279017 Graham et al. Sep 2019 A1
20190287118 Ross et al. Sep 2019 A1
20190342102 Hao et al. Nov 2019 A1
20190354822 Pic et al. Nov 2019 A1
20190362186 Irshad et al. Nov 2019 A1
20200065569 Nduka et al. Feb 2020 A1
20200153822 Land et al. May 2020 A1
20200226366 Withrow et al. Jul 2020 A1
20200233901 Crowley et al. Jul 2020 A1
20200250395 Ross et al. Aug 2020 A1
20200257791 Shannon et al. Aug 2020 A1
20200296521 Wexler et al. Sep 2020 A1
20200311404 Derakhshani Oct 2020 A1
20200334689 Withrow Oct 2020 A1
20200349379 Ross Nov 2020 A1
20200356751 Matsuda et al. Nov 2020 A1
20200356772 Withrow et al. Nov 2020 A1
20210012081 Guo et al. Jan 2021 A1
20210049252 Ando et al. Feb 2021 A1
20210158509 Kwak et al. May 2021 A1
20210314316 Ross Oct 2021 A1
20210375291 Zeng et al. Dec 2021 A1
Foreign Referenced Citations (50)
Number Date Country
104519048 Apr 2015 CN
102006005927 Aug 2007 DE
0439669 Aug 1991 EP
0759596 Feb 1997 EP
1016548 Jul 2000 EP
1016549 Jul 2000 EP
1719070 Apr 2009 EP
2107506 Oct 2009 EP
2166493 Mar 2010 EP
2195621 Nov 2013 EP
2866193 Apr 2015 EP
2257909 May 2015 EP
2869240 May 2015 EP
2869241 May 2015 EP
3208744 Aug 2017 EP
3249581 Nov 2017 EP
3267384 Jan 2018 EP
3270342 Jan 2018 EP
2467465 Jun 2014 ES
2097979 Nov 1982 GB
2446837 Aug 2008 GB
2482127 Jan 2012 GB
61234481 Oct 1986 JP
H07192112 Jul 1995 JP
2005321935 Nov 2005 JP
2007213148 Aug 2007 JP
2008021082 Jan 2008 JP
2010146158 Jul 2010 JP
5278978 May 2013 JP
20010016395 Mar 2001 KR
20120009654 Feb 2012 KR
2005086616 Sep 2005 WO
2006038114 Apr 2006 WO
2007028799 Mar 2007 WO
2007031176 Mar 2007 WO
2007071788 Jun 2007 WO
2007090437 Aug 2007 WO
2007144598 Dec 2007 WO
2009030853 Mar 2009 WO
2009089126 Jul 2009 WO
2009115611 Sep 2009 WO
2010018464 Feb 2010 WO
2010018646 Feb 2010 WO
2012145842 Nov 2012 WO
2013051019 Apr 2013 WO
2013126221 Aug 2013 WO
2013173408 Nov 2013 WO
2015004434 Jan 2015 WO
2016081755 May 2016 WO
2016081831 May 2016 WO
Non-Patent Literature Citations (32)
Entry
“Intrinsic Characteristics for Authentication; AlpVision Advances Security Through Digital Technology”, Authentication News, Sep. 2006, vol. 12, No. 9, 3 pages.
Bao et al., “Local Feature based Multiple Object Instance Identification using Scale and Rotation Invariant Implicit Shape Model,” 12th Asian Conference on Computer Vision, Singapore, Singapore, Nov. 1-5, 2014, pp. 600-614.
Beekhof et al., “Secure Surface Identification Codes,” Proceedings of SPIE 6819: Security, Forensics, Steganography, and Watermarking of Multimedia Contents X: 68190D, 2008 (12 pages).
Buchanan et al., “Fingerprinting documents and packaging,” Nature 436 (7050): 475, 2005.
Cavoukian et al., “Biometric Encryption: Technology for Strong Authentication, Security and Privacy,” in Policies and Research in Identity Management, IFIP International Federation for Information Processing, vol. 261, 2008, pp. 57-77.
Di Paola et al., “An Autonomous Mobile Robotic System for Surveillance of Indoor Environments,” International Journal of Advanced Robotic Systems 7(1): 19-26, 2010.
Drew, M. S., et al. “Sharpening from Shadows: Sensor Transforms for Removing Shadows using a Single Image,” Color and Imaging Conference, vol. 5, Society for Imaging Science and Technology, 2009, pp. 267-271.
Ebay, “eBay Launches Must-Have iPhone App RedLaser 3.0” https://www.ebayinc.com/stories/news/ebay-launches-musthave-iphon-app-redlaser30/, Nov. 18, 2011 (Year: 2011), 8 pages.
Entrupy.com Website History, Wayback Machine https://web.archive.org/web/20160330060808/https://www.entrupy.com/; Mar. 30, 2016 (Year: 2016) 2 pages.
Farid, “Digital Image Forensics,” Lecture notes, exercises and MATLAB code, CS 89/189, Dartmouth College, Hanover, New Hampshire, USA, 2013, 199 pages.
Fischler et al., “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Communications of the ACM 24(6): 381-395, 1981.
Huang et al., “A Novel Binarization Algorithm for Ballistic Imaging Systems,” 3rd International Congress on Image and Signal Processing, Yantai, China, Oct. 16-18, 2010, pp. 1287-1291.
Huang, et al., “An Online Ballistics Imaging System for Firearm Identification”: 2nd International Conference on Signal Processing Systems, Dalian, China, Jul. 5-7, 2010, vol. 2, pp. 68-72.
Kartik et al., “Security System with Face Recognition, SMS Alert and Embedded Network Video Monitoring Terminal,” International Journal of Security, Privacy and Trust Management 2(5):9-19, 2013.
Li, “Image Processing for the Positive Identification of Forensic Ballistics Specimens,” Proceedings of the 6th International Conference on Information Fusion, Cairns, Australia, Jul. 8-11, 2003, pp. 1494-1498.
Li, “Firearm Identification System Based on Ballistics Image Processing,” Congress on Image and Signal Processing, School of Computer and Information Science, Faculty of Computing, Health and Science Edith Cowan University, Perth, Australia pp. 149-154.
Maddern et al., “Illumination Invariant Imaging: Applications in Robust Vision-based Localization, Mapping and Classification for Autonomous Vehicles,” IEEE International Conference on Robotics and Automation, Hong Kong, May 31-Jun. 7, 2014, 8 pages.
Matsumoto et al., “Nano-artifact metrics based on random collapse of resist,” Scientific Reports 4:6142, 2014 (5 pages).
Mistry et al., “Comparison of Feature Detection and Matching Approaches: SIFT and SURF,” Global Research and Development Journal for Engineering, vol. 2, Issue 4, Mar. 2017, 8 pages.
NCOA Link at http:/ /ribbs.usps.gov/ncoalink/ncoalink_print.htm; dated May 27, 2009; 3 pages.
Online NCOALink® Processing Acknowledgement Form (PAF) Released by Lorton Data, Jun. 2, 2009, URL=http://us.generation-nt.com/online-ncoalink-processingacknowledgement-form-paf-released-by-press-1567191.html, download date Jun. 25, 2010, 2 pages.
Rublee et al., “ORB: an efficient alternative to SIFT or SURF,” IEEE International Conference on Computer Vision, Barcelona, Spain, Nov. 6-13, 2011, 8 pages.
Schneider et al., “A Robust Content Based Digital Signature for Image Authentication,” Proceedings of the International Conference on Image Processing, Lausanne, Switzerland, Sep. 19, 1996, pp. 227-230.
Sharma et al., “The Fake vs Real Goods Problem: Microscopy and Machine Learning to the Rescue,” KDD 2017 Applied Data Science Paper, Aug. 13-17, 2017, Halifax, NS, Canada, 9 pages.
Shi et al., “Smart Cameras: Fundamentals and Classification,” Chapter 2, Belbachir (ed.). Smart Cameras, Springer, New York, New York, USA 2010, pp. 19-34.
Shields, “How To Shop Savvy With Red Laser,” published online on Mar. 22, 2010; https://iphone.appstomn.net/reviews/lifestyle/how-to-shop-savy-with-redlaser/, downloaded Mar. 22, 2010, 8 pages.
Smith, “Fireball: A Forensic Ballistic Imaging System: Proceedings of the 31st Annual International Carnahan Conference on Security Technology,” Canberra, Australia, Oct. 15-17, 1997, pp. 64-70.
Takahashi et al., Mass-produced Parts Traceability System Based on Automated Scanning of “Fingerprint of Things,” 15th IAPR International Conference on Machine Vision Applications, Nagoya, Japan, May 8-12, 2017, 5 pages.
United States Postal Service Publication 28 “Postal Addressing Standards”, dated Jul. 2008; text plus Appendix A only; 55 pages.
United States Postal Service, “NCOALink Systems”, http://www.usps.com/ncsc/addressservices/moveupdate/changeaddress.htm, website accessed Jun. 23, 2010, 2 pages.
Veena et al., “Automatic Theft Security System (Smart Surveillance Camera),” Computer Science & Information Technology 3:75-87, 2013.
Woods, “Counterfeit-spotting truth machine launches out of Dumbo,” published online on Feb. 11, 2016, downloaded from http://technically/brooklyn/2016/02/11/entrupy-counterfeit-scanner/ on Mar. 20, 2019, 3 pages.
Related Publications (1)
Number Date Country
20220139106 A1 May 2022 US
Provisional Applications (1)
Number Date Country
62802177 Feb 2019 US
Continuations (2)
Number Date Country
Parent 17189470 Mar 2021 US
Child 17575940 US
Parent 16780882 Feb 2020 US
Child 17189470 US