Augmented reality (“AR”) is the real-time superimposition of artificial images onto real world images. This technology has been used for a variety of applications, from handheld entertainment to heads-up displays on military jets. For example, a person using an augmented reality-enabled display device (e.g., a smartphone with an AR app or AR-enabled glasses) while shopping can be shown virtual advertisements, sale announcements, pricing information, etc. superimposed onto images of actual products they can purchase.
Because AR tends to drive higher levels of attention, one advantageous use of augmented reality is superimposing advertisements and/or purchasing information over product images. People who might ignore traditional electronic advertisements may pay attention to an AR advertisement.
This technology provides systems, methods and techniques for automatically recognizing two-dimensional real world objects with an augmented reality display device, and augmenting or enhancing the display of such real world objects by superimposing virtual images such as a still or video advertisement, a story or other virtual image presentation.
In non-limiting embodiments, the real world object includes visible features, including visible security features, and a recognition process takes the visible security features into account when recognizing the object and/or displaying superimposed virtual images.
The printed object may include visible features such as distinctive images of persons, buildings, etc. In the example shown, the printed object further includes an example visible security feature S represented here as a vertical band. The scanned digital copy will include the distinctive image(s) as well as the visible security feature S.
At 20, the resulting digital copy of the printed object is uploaded into a database, and enhanced to provide a selected associated overlay comprising: (a) self-starting video(s) and/or video(s) that start when a button is pressed; (b) button(s) that when selected have different interactive functions such as “tell a story”; and (c) an interactive connection to an online shopping experience, e.g., to show a store finder (see 50). We can change the digital “overlay” quickly if needed (e.g., to switch from one advertising campaign to another).
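As a minimal sketch of what such a database overlay record might look like, the following Python data structures pair a digital copy with its interactive overlay elements. The field names, action kinds, and URLs are illustrative assumptions rather than an actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OverlayAction:
    """One interactive element of the overlay (illustrative fields)."""
    kind: str                        # e.g. "autoplay_video", "button_story", "store_finder"
    label: Optional[str] = None      # button caption, e.g. "Tell a story"
    media_url: Optional[str] = None  # still image or video asset to superimpose
    link_url: Optional[str] = None   # e.g. online shop or store-finder endpoint

@dataclass
class OverlayRecord:
    """Database entry pairing a scanned digital copy with its overlay."""
    object_id: str                   # identifier of the uploaded digital copy
    campaign: str                    # overlays can be swapped per advertising campaign
    actions: List[OverlayAction] = field(default_factory=list)

# Example overlay corresponding to the elements described at step 20
overlay = OverlayRecord(
    object_id="print-0001",
    campaign="campaign-A",
    actions=[
        OverlayAction(kind="autoplay_video", media_url="https://example.com/ad.mp4"),
        OverlayAction(kind="button_story", label="Tell a story",
                      media_url="https://example.com/story.mp4"),
        OverlayAction(kind="store_finder", label="Find a store",
                      link_url="https://example.com/stores"),
    ],
)
```

Swapping the `campaign` field and its associated actions is all that would be needed to move from one advertising campaign to another.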
Within that overlay, we determine an “area of recognition” as indicated by a cross-hatched area (30). In one example embodiment, the area of recognition is determined at least in part by the presence, position, dimensions and/or orientation of one or more security features S. For example, the area of recognition may be defined to exclude the security feature S on the printed object. As an example, in the case of a 20 Euro note (see
In another embodiment at 40, at least some security features S are included in the defined area of recognition. The object is recognized only if it includes the security feature(s). If there are visible security features, we can include them in our recognition program. When we upload the digital copy of a print, we can decide which area of the print is used for recognition. See
Some embodiments provide plural overlapping areas of recognition for the same object; one area of recognition may exclude certain security features while another area of recognition includes them. The plural different overlapping areas of recognition can be applied sequentially or simultaneously to increase recognition reliability, and a voting algorithm can be used to select positive matches.
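A minimal sketch of the voting idea follows, assuming each area of recognition is a rectangular region of the reference image and that some matcher scores a captured image against a single area; the region format, the 0.7 score threshold, and the simple majority rule are illustrative assumptions.

```python
from typing import Callable, List, Tuple
import numpy as np

# An area of recognition is a rectangle (x, y, width, height) in the reference image.
Area = Tuple[int, int, int, int]
# A matcher scores a captured image against one area of the reference (0.0 .. 1.0).
Matcher = Callable[[np.ndarray, Area], float]

def recognize_with_voting(captured: np.ndarray,
                          areas: List[Area],
                          matcher: Matcher,
                          threshold: float = 0.7) -> bool:
    """Apply several (possibly overlapping) areas of recognition and vote.

    One area may exclude the security feature(s) while another includes them;
    the object is reported as recognized if a majority of the areas match.
    """
    votes = [matcher(captured, area) >= threshold for area in areas]
    return sum(votes) > len(votes) // 2
```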
In example non-limiting embodiments, the database enables real time recognition of an image captured by a user. For example, if the user captures an image of a 20 Euro note, a matching algorithm is used to determine a positive match if the database contains a digital copy of a 20 Euro note. In example non-limiting embodiments, the matching algorithm can include pattern recognition techniques such as described in Corvi et al., “Multiresolution image registration,” Proceedings of the International Conference on Image Processing (IEEE, 23-26 October 1995); Hasanuzzaman et al., “Robust and effective component-based banknote recognition by SURF features,” 20th Annual Wireless and Optical Communications Conference (IEEE, 15-16 April 2011); and Doush et al., “Currency recognition using a smartphone: Comparison between color SIFT and gray scale SIFT algorithms,” Journal of King Saud University-Computer and Information Sciences, Volume 29, Issue 4, October 2017, pages 484-492.
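As a rough illustration of this style of local-feature matching (a stand-in, not the specific algorithms of the cited papers), the sketch below uses OpenCV ORB keypoints, which, like SURF and SIFT features, can be matched between a captured photograph and a stored digital copy; the ratio test and the threshold of 40 good matches are arbitrary assumptions.

```python
import cv2

def is_positive_match(captured_path: str, reference_path: str,
                      min_good_matches: int = 40) -> bool:
    """Return True if enough local features of the capture match the stored copy."""
    captured = cv2.imread(captured_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    _, des_captured = orb.detectAndCompute(captured, None)
    _, des_reference = orb.detectAndCompute(reference, None)
    if des_captured is None or des_reference is None:
        return False

    # Brute-force Hamming matching with a Lowe-style ratio test to keep good matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_captured, des_reference, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good) >= min_good_matches
```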
As will be understood by those skilled in the art, the database 20 could but need not contain the captured images themselves. For example, in some embodiments, the database might contain compressed or other feature sets used for comparing with captured photos, such feature sets for example comprising a listing of coordinates and associated image features to thereby reduce storage requirements and increase recognition speed. Similarly, when a smart device captures an image, instead of uploading the entire image it may analyze the image and upload a compressed format such as coordinates of pattern features. Since the purpose of recognition in the example non-limiting embodiment is not to determine whether the banknote or other printed item is authentic and genuine, the matching/recognition standards can be significantly relaxed and thus quite different as compared to conventional banknote scanners/recognizers.
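The following sketch suggests what such a compact feature set could look like (keypoint coordinates plus their descriptors, serialized to JSON) so that neither the database nor the upload needs to carry the full photograph; the use of ORB features and the JSON layout are assumptions made for illustration.

```python
import json
import cv2

def extract_feature_set(image_path: str, max_features: int = 500) -> str:
    """Reduce an image to a compact, JSON-serializable feature set.

    Only keypoint coordinates and their descriptors are kept, which reduces
    storage requirements and avoids uploading the entire captured image.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    feature_set = {
        "coords": [[float(kp.pt[0]), float(kp.pt[1])] for kp in keypoints],
        "descriptors": descriptors.tolist() if descriptors is not None else [],
    }
    return json.dumps(feature_set)
```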
Some example embodiments use artificial intelligence and machine learning to perform the matching. The training set consists of images captured by various smartphones and other user devices.
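Purely as a sketch of this kind of learned matching (the embodiments do not prescribe a particular model), the example below trains a simple nearest-neighbor classifier on smartphone-captured images organized into one folder per label; the folder layout, image size, and choice of scikit-learn are assumptions.

```python
from pathlib import Path
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def load_dataset(root: str, size=(128, 64)):
    """Load training captures organized as root/<label>/*.jpg and resize them
    to a common shape so they can be fed to a simple classifier."""
    samples, labels = [], []
    for label_dir in Path(root).iterdir():
        if not label_dir.is_dir():
            continue
        for img_path in label_dir.glob("*.jpg"):
            img = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
            samples.append(cv2.resize(img, size).flatten() / 255.0)
            labels.append(label_dir.name)        # e.g. "eur_20", "usd_100"
    return np.array(samples), np.array(labels)

# Train on images captured by various smartphones and other user devices,
# then classify a new capture by predicting its label:
X_train, y_train = load_dataset("training_captures")
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
# label = model.predict([preprocessed_capture])[0]
```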
From a user perspective, as shown in
Once the app has been activated to recognize the captured image, the app connects with the database 20 on server 216 via network 214 and checks if this print is recognized (
In one non-limiting example, the app causes the smartphone 202 to show a picture or video by anchoring it to the image currently being captured by the smartphone camera 204 and displayed on the smartphone display 210. See
In more detail, a user can aim the camera of a smartphone or other electronic device at any two-dimensional real world object or a likeness of such an object. One example type of two-dimensional object could be a portable, wallet-sized planar object such as a banknote, driver's license, passport, or other official or unofficial identification document, examples of which are shown in
One example such object comprises a banknote such as a US dollar bill, US five dollar bill, US ten dollar bill, US twenty dollar bill (see
Such two-dimensional objects as described above often are protected by any or all of the following visible security features:
Example non-limiting recognition processes as described above can exclude such security features, or may take them into account or use them as part of the recognition process. However, since the purpose of the recognition is not to authenticate the photographed item as being genuine, the recognition/matching algorithm is quite different from ones that are used for banknote or ID authentication. In example non-limiting embodiments, for example, it is desirable that matching occurs based on photographing a copy (e.g., displayed on a tablet screen or the like) and not just an original of a banknote, ID or the like. Thus, the matching will achieve positive results based on counterfeit (inauthentic) banknotes or IDs. However, the matching/recognition is robust in being able to detect different banknote denominations (e.g., 10 Euro note versus 20 Euro note versus $100 US bill, etc.) and thus provide different overlays depending on which banknote is recognized.
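A minimal sketch of such relaxed, denomination-oriented matching follows: the capture is scored against each stored reference and the best-scoring label above a modest threshold wins, so that copies and even counterfeits still select the correct overlay. The similarity callable, the label names, and the 0.3 threshold are illustrative assumptions (the similarity could, for instance, be the fraction of good feature matches from the earlier sketch).

```python
from typing import Callable, Dict, Optional

def recognize_denomination(capture,
                           references: Dict[str, object],
                           similarity: Callable[[object, object], float],
                           min_score: float = 0.3) -> Optional[str]:
    """Pick whichever stored reference the capture matches best.

    The threshold is deliberately modest: a photo of a screen, a photocopy or
    even a counterfeit should still match, because the goal is choosing the
    right overlay (10 Euro vs. 20 Euro vs. $100 US, ...), not authentication.
    """
    best_label, best_score = None, 0.0
    for label, reference in references.items():   # e.g. {"eur_10": ..., "usd_100": ...}
        score = similarity(capture, reference)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= min_score else None

# overlay = overlays_by_label.get(recognize_denomination(capture, references, similarity))
```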
In still other embodiments, the two-dimensional object could comprise a political campaign poster, billboard, flyer or other printed material and the overlay could provide a campaign song, slogan, speech, story, video or other content.
In example embodiments herein, the two-dimensional object is not (and does not need to contain) a so-called “AR marker” or 2D bar code and is otherwise not specially designed or intended to be recognized by an augmented reality display device.
Instead of a video,
All printed publications cited above (including the ones shown in
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
This application is a continuation of U.S. patent application Ser. No. 17/228,821, filed Apr. 13, 2021, now U.S. Pat. No. ______, which is a continuation of U.S. patent application Ser. No. 16/895,637, filed Jun. 8, 2020, now U.S. Pat. No. 10,997,419, which is a continuation of U.S. patent application Ser. No. 16/565,234, filed Sep. 9, 2019, now U.S. Pat. No. 10,699,124. Each of these applications is incorporated herein by reference.
Relationship | Number | Date | Country
---|---|---|---
Parent | 17228821 | Apr 2021 | US
Child | 18095866 | | US
Parent | 16895637 | Jun 2020 | US
Child | 17228821 | | US
Parent | 16565234 | Sep 2019 | US
Child | 16895637 | | US