This disclosure uses identification techniques that uniquely identify digital images to acquire and share annotations of images over a wide area network. More particularly, a “fingerprint” or “signature” value is associated with each image that allows the image to be identified when viewed regardless of that image's URL or website location. Annotations relating to a single occurrence of an image can then be shared with viewers of other occurrences.
The described embodiments use identification techniques on electronic media items to allow the annotation of those media items. The system includes a database where user-created annotations to media items are stored. The database also includes URLs or other address information for the annotated media items, with some items being located at multiple network locations on a wide area network. The database also assigns and stores a fingerprint value for each annotated media item, which can be used to identify the same item when it is accessed at an unknown website or URL.
In one described embodiment, the database is maintained by a server computer that resides upon the network. The server is further responsible for identifying identical and nearly-identical media items, such as images, that are stored in different locations on the network. The server analyzes images for similarities by using an algorithm or process which is applied to each image in order to create a hash or fingerprint value for each image. This value is then stored in the database. When the same or similar image is accessed from a new URL or website, the same process is applied to this “new” image and a hash or fingerprint value is assigned to it. The server computer is then able to compare the fingerprint value for the new image with the values for images previously analyzed by the server and stored in the database. If the fingerprint value of the new image is within a threshold similarity of an existing stored fingerprint value, the new image is considered a match by the system. The network location of the new image is then stored in the database as another occurrence of the matched image. Annotations for the matched image that are already stored in the database are then considered applicable for the new image. In this way, annotations applied to one image that is found at various network locations will be stored together and may be applied to new versions of the image as they are accessed and identified by the system.
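By way of illustration only, the following sketch shows one way such a fingerprint value could be computed and compared. It uses a simple difference hash (“dHash”) together with a Hamming-distance threshold; the Pillow imaging library is assumed to be available, and the hash size and threshold shown are arbitrary example values rather than parameters required by this disclosure.

```python
from PIL import Image  # Pillow imaging library (assumed available)

HASH_SIZE = 8         # illustrative: yields a 64-bit fingerprint
MATCH_THRESHOLD = 10  # illustrative Hamming-distance threshold for a "match"

def fingerprint(path):
    """Compute a simple difference hash ("dHash") for an image file."""
    img = Image.open(path).convert("L").resize((HASH_SIZE + 1, HASH_SIZE))
    pixels = list(img.getdata())
    bits = []
    for row in range(HASH_SIZE):
        for col in range(HASH_SIZE):
            left = pixels[row * (HASH_SIZE + 1) + col]
            right = pixels[row * (HASH_SIZE + 1) + col + 1]
            bits.append("1" if left > right else "0")
    return int("".join(bits), 2)

def hamming_distance(a, b):
    """Number of differing bits between two fingerprint values."""
    return bin(a ^ b).count("1")

def is_match(new_value, stored_value, threshold=MATCH_THRESHOLD):
    """Treat two images as the same image if their fingerprints are close enough."""
    return hamming_distance(new_value, stored_value) <= threshold
```

In this sketch, two resolutions or slight crops of the same photograph would typically produce fingerprints that differ in only a few bits, so they would still satisfy the threshold test.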
In certain embodiments, the system for identifying matches between images is based on a hash algorithm, template matching, feature matching, found object identification, facial recognition, histogram comparison, or similar value identification and comparison schemes such as are known in the art.
Some embodiments include a web browser “plug in” or “extension” which acts to identify images on a web page and communicate with the central server and database that manages and stores annotations and image URL and hash values. In some embodiments the extension applies a hash algorithm to the image, determines the fingerprint value for that image, and then sends the fingerprint value to the central server for comparison. In other embodiments, the extension will send the web address to the central server, and the central server will be responsible for identifying the image through its URL or by applying the hash algorithm in order to apply the comparison process mentioned above. If a match is made, existing annotations associated with the matched image are made available to the user viewing the new image. If additional annotations are made to the currently viewed image, such annotations are sent to and stored on the central database for sharing with other users viewing that image at the same or different network location.
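A minimal sketch of this extension-to-server exchange, assuming a Flask-based server, is shown below. The endpoint name, the in-memory record store, and the helper functions are hypothetical placeholders introduced for this example, not components of the disclosure.

```python
from flask import Flask, request, jsonify  # lightweight web framework (assumed available)

app = Flask(__name__)

# Hypothetical in-memory stand-in for the central database; keyed by fingerprint value.
IMAGE_RECORDS = {}  # fingerprint -> {"annotations": [...], "locations": [...]}

def fingerprint_from_url(url):
    """Hypothetical helper: download the image at `url` and hash it (see the dHash sketch above)."""
    raise NotImplementedError("download-and-hash is outside the scope of this sketch")

def find_matching_image(value, threshold=10):
    """Hypothetical threshold comparison against stored fingerprint values."""
    for stored_value, record in IMAGE_RECORDS.items():
        if bin(value ^ stored_value).count("1") <= threshold:
            return record
    return None

@app.route("/identify", methods=["POST"])
def identify():
    """Accept either a precomputed fingerprint or an image URL from the extension."""
    payload = request.get_json()
    if "fingerprint" in payload:
        value = payload["fingerprint"]                # the extension hashed the image locally
    else:
        value = fingerprint_from_url(payload["url"])  # the server downloads and hashes the image
    record = find_matching_image(value)
    if record is None:
        return jsonify({"match": False})
    return jsonify({"match": True,
                    "annotations": record["annotations"],
                    "locations": record["locations"]})
```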
An embodiment of a system 100 for identifying and annotating media content such as digital images is shown in
The server 110 is in communication with a database 120. The database 120 may comprise programming and data found on the same physical computer or computers as the server 110. In this case, the database communications between the server 110 and the database 120 will be entirely within the confines of that physical computer. In other embodiments, the database 120 operates on its own computer (or computers) and provides database services to other, physically separate computing systems, such as server 110. When the database 120 operates on its own computer, the database communication between the server 110 and the database 120 may comprise network communications, and may pass over an external network such as network 130 shown in
In the embodiment shown in
The server 110 is in electronic communication with a network 130. Network 130 can be any form of computer network such as a local-area network (LAN) or a wide-area network (WAN) such as the Internet.
Communicating over that network 130 and with the server 110 are any of a number and variety of end user computing devices 140. Such devices 140 may be personal computers, smart phones, tablets, or other electronic devices capable of and configured for electronic interaction with the network 130 and server 110. Operating on these user computing devices 140 are browser applications or apps 142, which constitute software programming to allow a user to view images, text, and other media content materials that are found on source locations 150, 160 over the network 130. Browser apps 142 are designed to allow the user of the user computing device 140 to select various locations on the network 130, such as source A 150 or source B 160, in order to review the media content found at or presented by those locations 150, 160. More particularly, server computers are present at locations 150, 160 in order to serve up media content to remote computers that request such content over the network 130. In some embodiments, the source locations 150, 160 are controlled by web server computers (computers operating web server software) that provide media content in response to URL requests from the remote computers. URLs are Internet based addresses that can uniquely identify content on the Internet. Each item of media content, such as image A 170, will be associated with its own, unique URL. Frequently, identical media content is found at multiple network addresses on the network 130. For instance, Image A 170 is shown on
To achieve proper interaction with the server 110, user computing devices 140 will include a specially programmed software program such as a web browser “plug-in” or extension, hereinafter referred to generically as extension 144. The extension 144 interacts with the browser 142, and is designed to monitor media content displayed by the browser 142. The extension 144 provides information about this content to the server 110. The extension 144 is also responsible for receiving annotations (stored in annotation database entities 126) about that content from the server 110 and for presenting those annotations to the user through a user interface created by the extension 144. In some instances, this user interface will be integrated into the user interface provided by the browser 142.
It is possible to combine the browser 142 and the extension 144 into a custom application or app 146 that provides the functions of both elements 142, 144. Effectively, such an app 146 would integrate the functionality of the extension 144 into the core programming of the browser 142. Although the use of a custom application 146 has many advantages, the remainder of this description will assume that the extension 144 is separate from the browser 142 and manages all communications with the server 110.
Note that an individual interaction between the server 110 and the extension 144 will typically involve multiple communications back and forth between these elements. These communications can be made through encrypted or otherwise secured communications pathways, as are well-known in the prior art. The communications can use a communications identifier to identify a single communications stream between the extension 144 and the server 110, which can be maintained for all communications about an image.
In general terms, the system 100 of the present disclosure as shown in
One method 200 for operating this system 100 is shown in
When a user computing device 140 is reviewing material on the network 130, such as the material made available at source A 150, the device 140 will display images and other media content such as Image A 170. When the browser downloads and displays this image A 170, the extension 144 notes the image's URL or network location (location URL-A in
When the server 110 receives the location data, it compares this data with the location data 122 already stored in the database 120. This comparison takes place at step 215. If the image's location has already been analyzed by the server 110, its network location will be found in location data 122 and a match will be found. In some embodiments, it is not enough that the network location of the viewed image 170 match a network location 122 stored in the database 120 because it is always possible that the same network location will contain different media content items over time. For instance, the network location “www.website.com/images/front-page.gif” may contain the front page image for a website, and may be changed frequently as the website is updated. As a result, in many embodiments step 215 will check not only the network address, but will also check metadata concerning the image. Some relevant metadata for an image may include, for example, the image's resolution and exact data size. This information would be stored in the location database record 122 when created, and will be transmitted by the extension 144 along with the media network location. If the network location and the stored metadata all match, step 215 can then be considered to have found a match.
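One way the step 215 check could be implemented against a relational store is sketched below, assuming each location record 122 carries the URL, resolution, and file size of the image when first recorded. The table and column names are illustrative assumptions only.

```python
import sqlite3

def location_match(db, url, resolution, file_size):
    """Step 215-style check: the URL alone is not trusted, so the stored image
    metadata (resolution and exact file size) must also match before the location
    record is considered a hit. Table and column names are illustrative."""
    row = db.execute(
        "SELECT item_id FROM locations "
        "WHERE url = ? AND resolution = ? AND file_size = ?",
        (url, resolution, file_size),
    ).fetchone()
    return row[0] if row else None  # the matched image record id, or None

# Illustrative use:
# db = sqlite3.connect("annotations.db")
# item_id = location_match(db, "www.website.com/images/front-page.gif", "800x600", 48213)
```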
If a match is found at step 215, the image record 124 associated with the matched network location 122 will be accessed to retrieve information about the relevant image at step 220. The server 110 then uses the database 120 to identify the relevant annotations in step 225 by determining which of the annotation records 126 are associated with this image record 124.
The server 110 will return the annotations identified in records 126 and any other relevant information found in record 124 to the extension 144 in step 230. The extension 144 can then present this information and the relevant annotations to the user through the user interface of browser 142. This image information may include image occurrence information (URLs of the occurrences of this image stored in records 122) and all annotations found in records 126 that are associated with this image. In some embodiments, the URLs and annotations are not downloaded en masse to the extension 144, but rather the extension 144 is merely made aware of these elements. Metadata may be provided to the extension 144 to allow a user to see that more information is available about this image. When the user requests specific information, the requested information is then downloaded from the server 110.
In response to any user interaction with a displayed media item in the user interface provided by the extension 144 and browser 142 (clicks, taps, scrolling, hovering, etc.), the extension 144 looks up the relevant information that it received from the server 110. If the extension 144 has additional information to display about the item, it can display that information via overlays, popups, mouse-hover-over or tap-and-hold overlays, side-panels, slide-out panels that slide out from under the image, buttons, notification icons, etc. Interacting with those UI elements can provide the user with any additional information that is available, including annotations provided by the annotation database elements 126. This information can also include a list of other pages that contain similar content based on the location database entities 122. Some annotations will have a text-only representation (stories, comments, etc.), and others may include audio and/or video commentaries concerning the media item. It is also possible that the annotations may include links to purchase items relevant to the image, to purchase representations of the image itself, or other suggestions based on the image. Annotations may also include links to other websites which feature the same (or similar) media item.
In addition to displaying existing annotations found in database elements 126, the extension 144 is also capable of receiving new annotations for the image 170 being viewed. In fact, this “crowd-sourced” ability to gather annotations from a wide variety of users on the images found on the network 130 is one of the primary advantages of the extension 144. These annotations can take a variety of forms, such as textual, audio, or audio-visual annotations. The annotations can relate to the entire image, or can relate to only a sub-region of the image. For instance, Image A 170 may be an internal image of an ancient Spanish church. A first annotator of the photograph may have created a written annotation for this image, describing the history of this church, and its conversion from a Christian church to an Islamic mosque, and back to a Christian church. A second annotator may have provided an audio commentary on a mosaic that is found in a small portion (or sub-region) of the image. In creating this audio commentary, this person would have specified the sub-region of the image showing the mosaic. The audio commentary would be associated with this sub-region within the annotations database record 126, and an indication of the sub-region may be presented to a later viewer of the image through the extension 144. A third annotator might have created a video annotation showing the interior of the church from the late 1990s. A new viewer of the image can view and examine these annotations through extension 144, even if they are viewing the image on a different website than that which was viewed when the annotations were originally created. This viewer may then elect to comment on a previously created annotation, adding a nuanced correction to the historical description of the church. This new annotation is received by the extension 144 through the browser user interface 142 at step 235, and then reported up to the server 110.
The server 110 will then create a new annotation record 126 in the database 120, and associate this new record with the image record 124 (step 240). This will allow this new annotation to be available for the next viewer of this image, wherever that image may appear on network 130. Since a new annotation may relate to an earlier annotation, the new annotation database record 126 might include a link to the earlier annotation record 126. In some embodiments, the database 120 includes information about users that contribute annotations to the system 100, and each annotation record 126 is linked to a customer record (not shown in
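The sketch below illustrates one possible shape for an annotation record 126, including the optional sub-region, the link back to an earlier annotation, the link to the contributing user, and the originating location discussed below. All field names are assumptions made for this sketch rather than requirements of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AnnotationRecord:
    """Illustrative shape of an annotation database record (element 126)."""
    annotation_id: int
    item_id: int                  # link to the annotated image record (element 124)
    author_id: int                # link to the contributing user's record
    kind: str                     # e.g. "text", "audio", or "video"
    content: str                  # text body, or a reference to stored audio/video media
    sub_region: Optional[Tuple[int, int, int, int]] = None  # (x, y, width, height) within the image
    parent_annotation_id: Optional[int] = None  # set when commenting on an earlier annotation
    origin_location_id: Optional[int] = None    # location record (element 122) viewed when created
```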
In some embodiments, users that view annotations are encouraged to rank or grade the annotations (such as on a scale from 1-5). The average grade of a user's annotations, and/or the number of annotations created, could be used to assign a grade or level to a user. This information could then be shared each time an annotation of that user is shared. For example, the system 100 could share that a particular annotation was created by the copyright owner of the image (such as the photographer that took the image) or was created by a “5-star” annotator. In some embodiments, an annotator may be requested to self-identify their annotation as a “factual” annotation or an “opinion” annotation (or some other class of annotation). This classification could be stored in the annotation database record 126, and the extension 144 can use these classifications to filter annotations for end user display. End users would then be given the opportunity to object to and correct the author's self-classification to allow crowd-source verification of such classifications.
In other circumstances, it may be useful to link annotation records 126 back to the particular location 122 that was being viewed when the annotation was created. While the primary benefit of the approach described herein is that annotations on a media item 124 apply to any location 122 for that item, tracking the originating location 122 for an annotation 126 may be useful when the annotations are later analyzed and presented. After the annotations are stored in the database 120, the process 200 will then end at step 245.
If step 215 finds that the database 120 does not have a URL record 122 that matches that of the network address provided by the extension in step 210, the server 110 then must determine whether this “new” image is in actuality a new image, or merely a new location for a previously identified image. This is accomplished by downloading the image from the provided network address in step 250, and then generating a hash/signature/fingerprint value for the image using an image hashing algorithm in step 255. Image hashing algorithms that are designed to identify identical copies and nearly identical versions of images are known in the prior art. U.S. Pat. Nos. 7,519,200 and 8,782,077 (which are hereby incorporated by reference in their entireties) each describe the use of a hash function to create an image signature of this type to identify duplicate images found on the Internet. An open-source project for the creation of such a hash function is found on pHash.org, which focuses on generating unique hashes for media. Those hashes can be compared using a ‘hamming distance’ to determine how similar the media elements are. The hash works on image files, video files, and audio files, and the same concept could even be applied to text on a page (quotes, stories, etc.).
Once a hash or fingerprint value is generated, it is then compared to other image fingerprint values stored in database 120 within the item information database entities 124 (step 260). The goal of this comparison is to find out whether the newly generated fingerprint value (from step 255) “matches” the hash value found in data entities 124. An exact equality between these two values is not necessary to find a match. For example, a digital GIF, JPEG, or PNG image made at a high resolution can be re-converted into a similar GIF, JPEG, or PNG image having a different resolution. These two images will create different fingerprint values, but if the correct hash/fingerprint algorithms are used, the resulting values will be similar. In other words, they will have a short hamming distance. Similarly, a slightly different crop of the same image may create close, but still different, hash values. The test for determining matches at step 260 will reflect this reality and allow slightly different fingerprint values to match and therefore indicate that these slight variations represent the same image.
If a match is found at this step between the hash value of the image identified in step 210 and one of those values stored in the database 122, the server 110 has identified the “new” image as simply a new location for a previously identified image. For example, the server 110 may have previously identified image A at location 172 (URL-B), and then recognized that the image A found at location 170 (URL-A) was identical to this image. If such a match is found and the matching image record is identified (step 265), then the server 110 will create a new location data record 122 in the database 120 and associate this new record 122 with the matching item record 124 (step 270). In one embodiment, this record 122 will include the new URL or network location, the context in which this image or media item was seen (such as the webpage in which the image was integrated and text surrounding the image, which is provided by the extension 144 in step 210), when the image was seen, and metadata related to this image (such as resolution and file size).
In one embodiment, this metadata will also include the hash value generated at step 255, which, as explained above, may be slightly different than the original hash value for the image stored in record 124 even though a match was found in step 260. The storing of hash values in the location records 122 allows the match that takes place at step 260 to include an analysis of the hash values of location records 122 as well as the hash values of the main image records 124. In effect, a new image would then be matched against all instances and variations of the image known by the database 120.
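Continuing the illustrative relational sketch from above, recording a newly identified occurrence at step 270 might look like the following; the column set mirrors the data described in the preceding paragraphs, and all names remain assumptions made for this example.

```python
def add_location_occurrence(db, item_id, url, context, resolution, file_size, hash_value):
    """Step 270-style insert: record a newly seen occurrence of an already known image,
    keeping this occurrence's own hash value alongside its metadata."""
    db.execute(
        "INSERT INTO locations "
        "(item_id, url, context, resolution, file_size, hash_value, first_seen) "
        "VALUES (?, ?, ?, ?, ?, ?, datetime('now'))",
        (item_id, url, context, resolution, file_size, hash_value),
    )
    db.commit()
```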
In some embodiments, the hash value comparison at step 260 finds only exact matches in the hash values. These embodiments would misidentify minor modifications to an image as a new image altogether. However, in exchange for this shortcoming, the comparison at step 260 is greatly simplified. There would be no need to determine “hamming” distances, there would be a significantly reduced risk of false matches, and the comparison itself could be accomplished using a simple, binary search tree containing all known hash values in the database 120.
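For this exact-match variant, the comparison at step 260 can reduce to a direct lookup. The short sketch below uses a Python dictionary in place of the binary search tree mentioned above, purely for brevity; the stored value shown is a made-up example.

```python
# Exact-match variant of step 260: no Hamming distances, just a direct lookup.
# Maps each stored fingerprint value to its image record identifier (illustrative only).
known_hashes = {0x9F3A62C1D4E8B507: 124}

def exact_match(new_value, known=known_hashes):
    """Return the matching image record id, or None if the fingerprint is unseen."""
    return known.get(new_value)
```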
The creation of the new location entity 122 in step 270 means that this instance of the image will be automatically associated with the appropriate image item 124 the next time it is reported by the extension 144 (at step 215), thereby limiting the need to perform the computationally intensive task of creating the hash value at step 255 and doing the comparison at step 260. Once the new location entity 122 is created, the method 200 continues with step 225, with existing annotations and image data for the identified image being transmitted to the extension 144 by the server 110.
In an instance where the server 110 determines that the image 170 is unique (or, more accurately, is being identified to the server 110/database 120 for the first time because there was no match in step 260), the server 110 will report back to the extension 144 that no match was found. In some cases, the identification of a match in step 260 may not be instantaneous. In these cases, the server 110 may report back to the extension 144 that no match has been found yet. The extension 144 may maintain communication with the server, via a persistent connection such as web sockets (or via polling the server 110, push notifications, or any other means of continuous or repeating communications), to determine if a match is eventually found. If so, processing will continue at step 265. If the server 110 has completed the comparison with all item records 124 (and all location records 122 if they contain hash values), and determined that there is no match, the server 110 will create a new record 124 for the image in database 120 at step 275. This new record 124 will contain the hash/fingerprint value created at step 255 for this image. In addition, the image's URL location will be stored in a new database entity 122 that is associated with this image record 124 (step 280). Since there was not a pre-existing image record 124 in the database for this image, there could not be any existing data or annotations that could be shared with the extension for user consumption. As a result, steps 225 and 230 are skipped, and the method continues at step 235 with the receipt of new annotations from the extension 144.
In the alternative embodiment 300 shown in
Using embodiment 300, it is possible to store annotations to the media item 310 at the server 110. The server 110 again has a processor 112 and communicates with a database 120, as was the case in
In another embodiment, a match between an image identified by the extension 144 and the annotated item records 124 is made through a technique other than a hash on the entire image file. The hash algorithms are usually preferred, as they base the comparison on the entire image and are less likely to create false positive matches. However, other techniques currently exist for finding similar photographs, including histogram comparison (comparing the color-lists and relative percentages of two images), template matching (searching for a specific sub-set of one image within another image), feature matching (identifying key-points in an image such as peaks or curves at different locations and comparing those key points with key points of other images), contour matching (identifying image contours and comparing those contours to other known contours), object matching (using machine learning to identify objects in images and comparing the found-object locations of those images with the found-object locations of those objects in other images), and facial recognition (using facial recognition and the locations of key facial features within the images to find similar images). Each of these techniques could be used in place of the hash algorithms described in connection with
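As one example of these alternative techniques, a histogram comparison between two images could look like the sketch below, which assumes the OpenCV library is available. The suggested similarity threshold in the trailing comment is an arbitrary illustration, not a value specified by this disclosure.

```python
import cv2  # OpenCV (assumed available); used here only to illustrate histogram comparison

def histogram_similarity(path_a, path_b):
    """Compare the color distributions of two images; 1.0 means identical histograms."""
    hists = []
    for path in (path_a, path_b):
        img = cv2.imread(path)
        hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hists.append(cv2.normalize(hist, hist).flatten())
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL)

# Illustrative use: treat a correlation above some tuned threshold as a candidate match.
# if histogram_similarity("new.jpg", "stored.jpg") > 0.9: ...
```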
In yet another embodiment, the images themselves are analyzed in order to determine the content of the images. For instance, known object recognition algorithms could be used to identify objects within the image. Pattern recognition and machine learning techniques can further identify image content. The intent of these algorithms is to identify objects or other content elements shown in the images. Once the content is identified, annotations and other elements can be associated with the content items. Annotations made on one content item found within a first image could then be shared with viewers of a different image that contains the same content. The data construct for creating this type of system 400 is shown in
Regardless of which technique is used, the server 110 will subject the media item to object identification algorithm(s) when the item database entity 124 is first created. Objects that are identified will be compared to preexisting object database entities 410 in the database 120. A match will create a new link between the preexisting object entity 410 and the item entity 124. If no match is found for an identified object, a new object database entity 410 can be created in the database 120 and then linked to the item entity 124. As shown in the crow's foot notation in
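A sketch of this link-or-create logic, again against an illustrative relational store, is shown below. The table names `objects` and `item_objects`, and the `detected_labels` input, are assumptions made for this example; the object identification itself is outside the scope of the sketch.

```python
def link_objects_to_item(db, item_id, detected_labels):
    """For each object label detected in a media item, link to an existing object
    entity (element 410) or create a new one, then record the item-object association."""
    for label in detected_labels:
        row = db.execute("SELECT object_id FROM objects WHERE label = ?", (label,)).fetchone()
        if row:
            object_id = row[0]
        else:
            cursor = db.execute("INSERT INTO objects (label) VALUES (?)", (label,))
            object_id = cursor.lastrowid
        db.execute("INSERT INTO item_objects (item_id, object_id) VALUES (?, ?)",
                   (item_id, object_id))
    db.commit()
```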
When the extension 144 of a user computing device 140 submits a new image to the server 110, the server 110 will be able to identify the image as new in a short time, but the object identification process may take longer. Thus the extension 144 may not be able to show any object annotations 420 immediately upon submission of a new image. But when the object identification process is complete, even a new image may contain existing objects that have already been the subject of an object annotation. These object annotations 420 can then be presented to the extension 144 for sharing with end users. Thus, a photograph of Angkor Wat in Cambodia found on a website may quickly result in relevant annotations even though the photograph and website were previously unknown to the system.
It is possible to implement the above embodiments without using an extension 144 or a custom application 146. To accomplish this, a server-side embeddable widget must be placed on a web page that incorporates and calls programming from a main provider site, much in the same way in which Google's Google Analytics service operates. Any page that includes this widget would be automatically enabled for enhanced viewing of the annotations 126, 420. By incorporating the functionality on the server side, this approach could increase the ability of the present invention to work on mobile devices, as mobile device browsers are less likely to work with extensions.
It is also possible to skip the location based comparison at step 215 in
Finally, it is possible to develop an external interface to the database 120 that would allow direct access to and searching of the database 120. This interface would allow users to input search criteria relating to items, people, places, or photos. These search criteria could then be compared with the items 124, objects 410, and annotations 126, 420 within database 120. The database 120 will then return any matching content found within the database (such as annotations 126, 420), as well as links to the locations 122 that contain the related content. This would allow, for instance, users to search for photographs of a particular individual. The annotations and metadata would be searched for that individual, and the URLs associated with matching annotations could be identified and shared with the searching user. Complex searches of images and other media types that would otherwise be impossible would become possible, all while using crowd-sourcing techniques to create the annotations that are used to search the media content.
The many features and advantages of the invention are apparent from the above description. Numerous modifications and variations will readily occur to those skilled in the art. Since such modifications are possible, the invention is not to be limited to the exact construction and operation illustrated and described. Other aspects of the disclosed invention are further described and expounded upon in the following pages.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/408,562, filed on Oct. 14, 2016, which is hereby incorporated by reference.