Image Annotation Over Different Occurrences of Images Using Image Recognition

Information

  • Patent Application
  • Publication Number
    20180107689
  • Date Filed
    October 16, 2017
  • Date Published
    April 19, 2018
Abstract
This disclosure relates to a system for acquiring and sharing annotations of images over a network based on fingerprint identification techniques for digital images. The system allows users to annotate images found on the network that are not controlled by the user. Annotations are stored in a database, thereby allowing multiple users to access all annotations associated with that image. The database includes a “fingerprint value” that is associated with each image, and which allows the image to be identified when viewed, regardless of that image's URL or website location. The complete annotation data for the image being viewed may then be presented to the viewer of the image in a website browser or other viewing application.
Description
FIELD OF THE INVENTION

This disclosure uses identification techniques that uniquely identify digital images to acquire and share annotations of images over a wide area network. More particularly, a “fingerprint” or “signature” value is associated with each image that allows the image to be identified when viewed regardless of that image's URL or website location. Annotations relating to a single occurrence can then be provided to viewers of other occurrences.


SUMMARY

The described embodiments use identification techniques on electronic media items to allow the annotation of those media items. The system includes a database where user-created annotations to media items are stored. The database also includes URLs or other address information for the annotated media items, with some items being located at multiple network locations on a wide area network. The database also assigns and stores a fingerprint value for each annotated media item, which can be used to identify the same item when it is accessed at an unknown website or URL.


In one described embodiment, the database is maintained by a server computer that resides on the network. The server is further responsible for identifying identical and nearly-identical media items, such as images, that are stored in different locations on the network. The server analyzes images for similarities by using an algorithm or process which is applied to each image in order to create a hash or fingerprint value for each image. This value is then stored in the database. When the same or similar image is accessed from a new URL or website, the same process is applied to this “new” image and a hash or fingerprint value is assigned to it. The server computer is then able to compare the fingerprint value for the new image with the values for images previously analyzed by the server and stored in the database. If the value of the new image is within a threshold similarity of an existing stored fingerprint value for a matched image, the new image is considered a match by the system. The network location of the new image is then stored in the database as another occurrence of the matched image. Annotations for the matched image that are already stored in the database are then considered applicable to the new image. In this way, annotations applied to one image that is found at various network locations will be stored together and may be applied to new versions of the image as they are accessed and identified by the system.
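As a rough illustration of this matching logic, the sketch below (not taken from the application; the threshold value, record layout, and function names are assumptions) compares a newly computed fingerprint against stored values and either records a new occurrence of a matched image or creates a new item record:

```python
# Minimal sketch of the matching decision described above. Fingerprints are
# assumed to be 64-bit integers; the threshold and record structures are
# illustrative only.

HAMMING_THRESHOLD = 10  # assumed similarity threshold, in differing bits

def hamming_distance(fp_a: int, fp_b: int) -> int:
    """Number of bit positions in which two fingerprint values differ."""
    return bin(fp_a ^ fp_b).count("1")

def classify_image(new_fp: int, new_url: str, items: list[dict]) -> dict:
    """Match a newly seen image against stored item records.

    Each entry of `items` is assumed to look like
    {"fingerprint": int, "locations": [str], "annotations": [str]}.
    """
    for item in items:
        if hamming_distance(new_fp, item["fingerprint"]) <= HAMMING_THRESHOLD:
            # Same (or nearly identical) image: record the new occurrence so
            # existing annotations now apply at this URL as well.
            item["locations"].append(new_url)
            return item
    # No match: treat as a first sighting of a brand-new image.
    new_item = {"fingerprint": new_fp, "locations": [new_url], "annotations": []}
    items.append(new_item)
    return new_item
```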


In certain embodiments, the system for identifying matches between images is based on a hash algorithm, template matching, feature matching, found object identification, facial recognition, histogram comparison, or similar value identification and comparison schemes such as are known.


Some embodiments include a web browser “plug-in” or “extension” which acts to identify images on a web page and to communicate with the central server and database that manage and store annotations, image URLs, and hash values. In some embodiments the extension applies a hash algorithm to the image, determines the fingerprint value for that image, and then sends the fingerprint value to the central server for comparison. In other embodiments, the extension will send the web address to the central server, and the central server will be responsible for identifying the image through its URL or by applying the hash algorithm in order to apply the comparison process mentioned above. If a match is made, existing annotations associated with the matched image are made available to the user viewing the new image. If additional annotations are made to the currently viewed image, such annotations are sent to and stored on the central database for sharing with other users viewing that image at the same or a different network location.
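The following sketch illustrates one possible shape of the extension-to-server exchange described above; the route names, JSON fields, and in-memory store are hypothetical stand-ins for the server and database, not details taken from the application:

```python
# Illustrative Flask sketch of the lookup and annotation exchange; an
# in-memory dict stands in for the central database.
from flask import Flask, request, jsonify

app = Flask(__name__)

# item_id -> {"fingerprint": int, "locations": [url, ...], "annotations": [...]}
ITEMS: dict[int, dict] = {}

@app.post("/lookup")
def lookup():
    body = request.get_json()
    for item_id, item in ITEMS.items():
        # The extension may have sent a locally computed fingerprint, or only a URL.
        if body.get("fingerprint") == item["fingerprint"] or body.get("url") in item["locations"]:
            return jsonify({"match": True, "item_id": item_id,
                            "annotations": item["annotations"]})
    return jsonify({"match": False, "annotations": []})

@app.post("/annotate")
def annotate():
    body = request.get_json()
    # Store the new annotation so other viewers of the same image can see it.
    ITEMS[body["item_id"]]["annotations"].append(body["annotation"])
    return jsonify({"stored": True})
```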





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic illustration of an embodiment of a system that can implement the present invention.



FIG. 2 is a flow chart showing a method for implementing an embodiment of the present invention.



FIG. 3 is an alternative embodiment of the system of FIG. 1.



FIG. 4 is another alternative embodiment of the system of FIG. 1.





DETAILED DESCRIPTION

An embodiment of a system 100 for identifying and annotating media content such as digital images is shown in FIG. 1. A core component of such a system 100 is a server 110. This server 110 can be a single computer with a processor 112, or can be a group of computers (each having a processor 112) that cooperate together to perform the tasks described herein. As is standard with programmed computers, programming instructions stored on memory devices (not shown in FIG. 1) are used to control the operation of the processor 112. The memory devices may include hard disks or solid state memory devices, which provide long term storage for instructions and data, as well as random access memory (RAM), which is generally transitory memory that is used for data and instructions currently being operated upon by the processor 112.


The server 110 is in communication with a database 120. The database 120 may comprise programming and data found on the same physical computer or computers as the server 110. In this case, the database communications between the server 110 and the database 120 will be entirely within the confines of that physical computer. In other embodiments, the database 120 operates on its own computer (or computers) and provides database services to other, physically separate computing systems, such as server 110. When the database 120 operates on its own computer, the database communication between the server 110 and the database 120 may comprise network communications, and may pass over an external network such as network 130 shown in FIG. 1.


In the embodiment shown in FIG. 1, the database 120 includes defined database entities for locations 122, items 124, and annotations 126. In one embodiment, these database entities 122, 124, 126 constitute database tables in a relational database. In other embodiments, these entities 122, 124, 126 constitute database objects or any other type of database entity usable with a computerized database. In the present embodiment, the phrases “database record” and “database entity” are used interchangeably to refer to data records in a database, whether comprising a row in a database table, an instantiation of a database object, or any other populated database entity. Note that these entities 122, 124, 126 are connected using crow's-foot lines to indicate the one-to-many relationships between these entities 122, 124, 126 that are maintained by the database 120.
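A minimal relational sketch of these three entities and their one-to-many relationships is shown below; the table and column names are illustrative assumptions only:

```python
# Illustrative schema for the locations 122, items 124, and annotations 126 entities.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (              -- item records 124
    item_id      INTEGER PRIMARY KEY,
    fingerprint  TEXT NOT NULL,   -- hash/signature value for the image
    metadata     TEXT             -- e.g. original context, occurrence counts
);
CREATE TABLE locations (          -- location records 122
    location_id  INTEGER PRIMARY KEY,
    item_id      INTEGER NOT NULL REFERENCES items(item_id),
    url          TEXT NOT NULL,
    resolution   TEXT,
    file_size    INTEGER
);
CREATE TABLE annotations (        -- annotation records 126
    annotation_id INTEGER PRIMARY KEY,
    item_id       INTEGER NOT NULL REFERENCES items(item_id),
    author        TEXT,
    body          TEXT NOT NULL,  -- text, or a pointer to audio/video content
    created_at    TEXT
);
""")
```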


The server 110 is in electronic communication with a network 130. Network 130 can be any form of computer network such as a local-area network (LAN) or a wide-area network (WAN) such as the Internet.


Communicating over that network 130 and with the server 110 are any of a number and variety of end user computing devices 140. Such devices 140 may be personal computers, smart phones, tablets, or other electronic devices capable of and configured for electronic interaction with the network 130 and server 110. Operating on these user computing devices 140 are browser applications or apps 142, which constitute software programming to allow a user to view images, text, and other media content materials that are found on source locations 150, 160 over the network 130. Browser apps 142 are designed to allow the user of the user computing device 140 to select various locations on the network 130, such as source A 150 or source B 160, in order to review the media content found at or presented by those locations 150, 160. More particularly, server computers are present at locations 150, 160 in order to serve up media content to remote computers that request such content over the network 130. In some embodiments, the source locations 150, 160 are controlled by web server computers (computers operating web server software) that provide media content in response to URL requests from the remote computers. URLs are Internet based addresses that can uniquely identify content on the Internet. Each item of media content, such as image A 170, will be associated with its own, unique URL. Frequently, identical media content is found at multiple network addresses on the network 130. For instance, Image A 170 is shown on FIG. 1 associated with source 150, but it is also shown with reference number 172 as being associated with source B 160. As shown in FIG. 1, when stored in connection with Source A 150, Image A 170 has a URL address of URL-A, while the same image 172 at source B 160 has a different URL (URL-B).


To achieve proper interaction with the server 110, user computing devices 140 will include a specially programmed software program such as a web browser “plug-in” or extension, hereinafter referred to generically as extension 144. The extension 144 interacts with the browser 142, and is designed to monitor media content displayed by the browser 142. The extension 144 provides information about this content to the server 110. The extension 144 is also responsible for receiving annotations (stored in annotation database entities 126) about that content from the server 110 and for presenting those annotations to the user through a user interface created by the extension 144. In some instances, this user interface will be integrated into the user interface provided by the browser 142.


It is possible to combine the browser 142 and the extension 144 into a custom application or app 146 that provides the functions of both elements 142, 144. Effectively, such an app 146 would integrate the functionality of the extension 144 into the core programming of the browser 142. Although the use of a custom application 146 has many advantages, the remainder of this description will assume that the extension 144 is separate from the browser 142 and manages all communications with the server 110.


Note that an individual interaction between the server 110 and the extension 144 will typically involve multiple communications back and forth between these elements. These communications can be made through encrypted or otherwise secured communications pathways, as are well-known in the prior art. The communications can use a communications identifier to identify a single communications stream between the extension 144 and the server 110, which can be maintained for all communications about an image.


In general terms, the system 100 of the present disclosure as shown in FIG. 1 identifies media content, such as a digital image A 170, that a user may encounter while browsing network locations. System 100 then aids in determining if that media content has been annotated (by the current user or by any other user) previously. The system 100 achieves this by comparing data associated with this occurrence of the image 170 to that of saved data on the server 110 to determine if that media item (such as image A 170) is identical to, or nearly identical to, image occurrences known to the database 120. If so, the server 110 communicates information stored in the database 120 about that image 170 to the extension 144. In particular, the server 110 can provide annotations (stored in database entities 126) made about that image 170 regardless of where the viewed occurrence is located on the network and regardless of where that image 170 was being viewed when it was previously annotated.


One method 200 for operating this system 100 is shown in FIG. 2. The method 200 begins at step 205, with the creation of the database 120. In this case, the term “creation” simply means that the database 120 is programmed and is ready to receive data. The actual data will be input into the database 120 through the rest of the steps of method 200. As explained above, the database 120 is constructed so that a media item database record 124 is created for each image or other item managed by the database 120. Each separately identified image, such as image A 170/172, will preferably have only a single record 124 in the database. This record 124 will contain the fingerprint (or “hash value” or “signature”) for the image that is used to identify identical and extremely similar images on the network 130. A single image or other item identified through record 124 may have multiple copies/instances found on the network 130, each at a separate network location, thereby resulting in multiple location data records 122 for that image record 124. In addition, each image record 124 may have multiple annotation records 126, with each such record 126 containing a separate written, audio, or multi-media annotation for the related media item. The annotation records 126 may also contain information about the user that created the annotation (such as the name or type of author of the annotation) and metadata about the annotation (such as when it was created and whether and how the annotation was edited by the author). The item record 124 itself also contains additional metadata about the image or about the image information existing in the database 120. For instance, this metadata may provide a count of the number of location records 122 identified for the image record 124, or the number of separate annotations 126 that have been collected. The metadata may also include information about the context where the image was originally seen. This context could be provided by the extension 144, and may include the webpage and website that incorporated the image, or the text that was found on the webpage surrounding the image.


When a user computing device 140 is reviewing material on the network 130, such as the material made available at source A 150, the device 140 will display images and other media content such as Image A 170. When the browser downloads and displays this image A 170, the extension 144 notes the image's URL or network location (location URL-A in FIG. 1) and then sends that location to the server 110. This occurs in step 210 in method 200. In one embodiment, the extension 144 analyzes all images that are being displayed by the browser 142 when they are downloaded from the source location 150, 160, and then sends the network location for all images being displayed up to the server 110 for processing. In another embodiment, the extension 144 provides a user interface (such as a pop-up menu item or a button or other GUI interface device) through which a user can request information about the image or images being displayed. Only when the user explicitly makes this request does the extension 144 determine the image's network address and transmit this address to the server 110. When the browser 142 is viewing a webpage, the extension 144 can identify images in the web page by identifying the source for an <IMG> image tag, as well as related attributes and CSS tags that identify images that will be displayed on a screen (such as background image tags and related CSS definitions). The extension 144 can identify the tags when the web page is first loaded, and can also monitor the browser 142 for additional content, as some content may be dynamically loaded on the webpage based on user interaction with the content.
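The extension itself would run as browser-side script, but the tag scan it performs can be illustrated with the following Python sketch (BeautifulSoup and the helper name are assumptions made only for illustration): it collects candidate image URLs from <IMG> tags and from inline background-image styles before reporting them to the server.

```python
# Illustrative scan for image URLs on a page; the real extension would do this in the browser.
import re
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def image_urls(page_html: str, page_url: str) -> set[str]:
    soup = BeautifulSoup(page_html, "html.parser")
    urls = set()
    for img in soup.find_all("img"):
        if img.get("src"):
            urls.add(urljoin(page_url, img["src"]))
    # Inline styles such as style="background-image: url('banner.png')"
    for tag in soup.find_all(style=re.compile(r"background-image")):
        match = re.search(r"url\(['\"]?([^'\")]+)", tag["style"])
        if match:
            urls.add(urljoin(page_url, match.group(1)))
    return urls
```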


When the server 110 receives the location data, it compares this data with the location data 122 already stored in the database 120. This comparison takes place at step 215. If the image's location has already been analyzed by the server 110, its network location will be found in location data 122 and a match will be found. In some embodiments, it is not enough that the network location of the viewed image 170 match a network location 122 stored in the database 120 because it is always possible that the same network location will contain different media content items over time. For instance, the network location “www.website.com/images/front-page.gif” may contain the front page image for a website, and may be changed frequently as the website is updated. As a result, in many embodiments step 215 will check not only the network address, but will also check metadata concerning the image. Some relevant metadata for an image may include, for example, the image's resolution and exact data size. This information would be stored in the location database record 122 when created, and will be transmitted by the extension 144 along with the media network location. If the network location and the stored metadata all match, step 215 can then be considered to have found a match.
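Continuing the illustrative schema sketched earlier, step 215 can be approximated as a lookup keyed on both the URL and the stored metadata, so that a reused network location whose underlying image has changed is not treated as a match (the column names remain assumptions):

```python
# Illustrative location-plus-metadata match for step 215.
def find_location_match(conn, url: str, resolution: str, file_size: int):
    row = conn.execute(
        """SELECT item_id FROM locations
           WHERE url = ? AND resolution = ? AND file_size = ?""",
        (url, resolution, file_size),
    ).fetchone()
    return row[0] if row else None   # matching item record id, or None
```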


If a match is found at step 215, the image record 124 associated with the matched network location 122 will be accessed to retrieve information about the relevant image at step 220. The server 110 then uses the database 120 to identify the relevant annotations in step 225 by determining which of the annotation records 126 are associated with this image record 124.


The server 110 will return the annotations identified in records 126 and any other relevant information found in record 124 to the extension 144 in step 230. The extension 144 can then present this information and the relevant annotations to the user through the user interface of browser 142. This image information may include image occurrence information (URLs of the occurrences of this image stored in records 122) and all annotations found in records 126 that are associated with this image. In some embodiments, the URLs and annotations are not downloaded en masse to the extension 144, but rather the extension 144 is merely made aware of these elements. Metadata may be provided to the extension 144 to allow a user to see that more information is available about this image. When the user requests specific information, the requested information is then downloaded from the server 110.


In response to any user interaction with a displayed media item in the user interface provided by the extension 144 and browser 142 (clicks, taps, scrolling, hovering, etc.), the extension 144 looks up the relevant information that it received from the server 110. If the extension 144 has additional information to display about the item, it can display that information via overlays, popups, mouse-hover-over or tap-and-hold overlays, side-panels, slide-out panels that slide out from under the image, buttons, notification icons, etc. Interacting with those UI elements can provide the user with any additional information that is available, including annotations provided by the annotation database elements 126. This information can also include a list of other pages that contain similar content based on the location database entities 122. Some annotations will have a text-only representation (stories, comments, etc.), and others may include audio and/or video commentaries concerning the media item. It is also possible that the annotations may include links to purchase items relevant to the image, to purchase representations of the image itself, or other suggestions based on the image. Annotations may also include links to other websites which feature the same (or similar) media item.


In addition to displaying existing annotations found in database elements 126, the extension 144 is also capable of receiving new annotations for the image 170 being viewed. In fact, this “crowd-sourced” ability to gather annotations from a wide variety of users on the images found on the network 130 is one of the primary advantages of the extension 144. These annotations can take a variety of forms, such as textual, audio, or audio-visual annotations. The annotations can relate to the entire image, or can relate to only a sub-region of the image. For instance, Image A 170 may be an internal image of an ancient Spanish church. A first annotator of the photograph may have created a written annotation for this image, describing the history of this church, and its conversion from a Christian church to an Islamic mosque, and back to a Christian church. A second annotator may have provided an audio commentary on a mosaic that is found in a small portion (or sub-region) of the image. In creating this audio commentary, this person would have specified the sub-region of the image showing the mosaic. The audio commentary would be associated with this sub-region within the annotations database record 126, and an indication of the sub-region may be presented to a later viewer of the image through the extension 144. A third annotator might have created a video annotation showing the interior of the church from the late 1990s. A new viewer of the image can view and examine these annotations through extension 144, even if they are viewing the image on a different website than that which was viewed when the annotations were originally created. This viewer may then elect to comment on a previously created annotation, adding a nuanced correction to the historical description of the church. This new annotation is received by the extension 144 through the browser user interface 142 at step 235, and then reported up to the server 110.
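One possible representation of such an annotation, including its optional sub-region and reply link, is sketched below; the field names and coordinate convention are assumptions made for illustration, not the application's format:

```python
# Illustrative annotation structure, including an optional sub-region of the image.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    item_id: int
    author: str
    kind: str                        # "text", "audio", or "video"
    body: str                        # text content, or a URL to the media file
    region: Optional[tuple] = None   # (x, y, width, height) in image pixels;
                                     # None means the annotation covers the whole image
    reply_to: Optional[int] = None   # id of an earlier annotation being commented on

# Example: the audio commentary on the mosaic sub-region described above.
mosaic_note = Annotation(item_id=1, author="second annotator", kind="audio",
                         body="https://example.org/mosaic-commentary.mp3",
                         region=(420, 310, 180, 140))
```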


The server 110 will then create a new annotation record 126 in the database 120, and associate this new record with the image record 124 (step 240). This will allow this new annotation to be available for the next viewer of this image, wherever that image may appear on network 130. Since a new annotation may relate to an earlier annotation, the new annotation database record 126 might include a link to the earlier annotation record 126. In some embodiments, the database 120 includes information about users that contribute annotations to the system 100, and each annotation record 126 is linked to a user record (not shown in FIG. 1). The user record could contain the user's name and age, a publicly displayed user name, their location, their submission history, their rank or status among users, a user type (anonymous, administrator, the website creator for an instance of the image, an image copyright owner, an advertiser, etc.), and their access rights or privileges to the rest of the system 100. In some cases, the user type (such as the copyright owner type) will need to be subject to some type of validation. Business rules for annotations could be customized based on the user types. For example, copyright owners could specify custom fields describing their images in data record 124, such as licensing info, links to their other work, etc. Advertisers and vendors could add links to places to purchase items in the image, allow people to purchase directly from the image, show other models of the items, etc.


In some embodiments, users that view annotations are encouraged to rank or grade the annotations (such as on a scale from 1-5). The average grade of a user's annotations, and/or the number of annotations created, could be used to assign a grade or level to a user. This information could then be shared each time an annotation of that user is shared. For example, the system 100 could share that a particular annotation was created by the copyright owner of the image (such as the photographer that took the image) or was created by a “5-star” annotator. In some embodiments, an annotator may be requested to self-identify their annotation as a “factual” annotation or an “opinion” annotation (or some other class of annotation). This classification could be stored in the annotation database record 126, and the extension 144 can use these classifications to filter annotations for end user display. End users would then be given the opportunity to object to and correct the author's self-classification to allow crowd-source verification of such classifications.


In other circumstances, it may be useful to link annotation records 126 back to the particular location 122 that was being viewed when the annotation was created. While the primary benefit of the approach described herein is that annotations on a media item 124 apply to any location 122 for that item, tracking the originating location 122 for an annotation 126 may be useful when the annotations are later analyzed and presented. After the annotations are stored in the database 120, the process 200 will then end at step 245.


If step 215 finds that the database 120 does not have a URL record 122 that matches the network address provided by the extension in step 210, the server 110 must then determine whether this “new” image is in actuality a new image, or merely a new location for a previously identified image. This is accomplished by downloading the image from the provided network address in step 250, and then generating a hash/signature/fingerprint value for the image using an image hashing algorithm in step 255. Image hashing algorithms that are designed to identify identical copies and nearly identical versions of images are known in the prior art. U.S. Pat. Nos. 7,519,200 and 8,782,077 (which are hereby incorporated by reference in their entireties) each describe the use of a hash function to create an image signature of this type to identify duplicate images found on the Internet. An open-source project for the creation of such a hash function is found on pHash.org, which focuses on generating unique hashes for media. Those hashes can be compared using a Hamming distance to determine how similar the media elements are. The hash works on image files, video files, and audio files, and the same concept could even be applied to text on a page (quotes, stories, etc.).
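A minimal sketch of steps 250 and 255 is shown below using the open-source Python `imagehash` package together with Pillow and requests; the choice of these particular libraries is an assumption of this example rather than something named by the application:

```python
# Illustrative download-and-fingerprint sketch for steps 250 and 255.
import io
import requests
import imagehash
from PIL import Image

def fingerprint_from_url(url: str) -> imagehash.ImageHash:
    response = requests.get(url, timeout=10)        # step 250: download the image
    image = Image.open(io.BytesIO(response.content))
    return imagehash.phash(image)                   # step 255: perceptual hash

# Subtracting two ImageHash values yields their Hamming distance, so
# near-duplicates (re-encoded or rescaled copies) produce small differences:
#     fingerprint_from_url(url_a) - fingerprint_from_url(url_b) <= threshold
```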


Once a hash or fingerprint value is generated, it is then compared to other image fingerprint values stored in database 120 within the item information database entities 124 (step 260). The goal of this comparison is to find out whether the newly generated fingerprint value (from step 255) “matches” the hash value found in data entities 124. An exact equality between these two values is not necessary to find a match. For example, a digital GIF, JPEG, or PNG image made at a high resolution can be re-converted into a similar GIF, JPEG, or PNG image having a different resolution. These two images will create different fingerprint values, but if the correct hash/fingerprint algorithms are used the resulting values will be similar. In other words, they will have a small Hamming distance. Similarly, a slightly different crop of the same image may create close, but still different, hash values. The test for determining matches at step 260 will reflect this reality and allow slightly different fingerprint values to match and therefore indicate that these slight variations represent the same image.


If a match is found at this step between the hash value of the image identified in step 210 and one of the values stored in the database 120, the server 110 has identified the “new” image as simply a new location for a previously identified image. For example, the server 110 may have previously identified image A at location 172 (URL-B), and then recognized that the image A found at location 170 (URL-A) was identical to this image. If such a match is found and the matching image record is identified (step 265), then the server 110 will create a new location data record 122 in the database 120 and associate this new record 122 with the matching item record 124 (step 270). In one embodiment, this record 122 will include the new URL or network location, the context in which this image or media item was seen (such as the webpage in which the image was integrated and text surrounding the image, which is provided by the extension 144 in step 210), when the image was seen, and metadata related to this image (such as resolution and file size).


In one embodiment, this metadata will also include the hash value generated at step 255, which, as explained above, may be slightly different than the original hash value for the image stored in record 124 even though a match was found in step 260. The storing of hash values in the location records 122 allows the match that takes place at step 260 to include an analysis of the hash values of location records 122 as well as the hash values of the main image records 124. In effect, a new image would then be matched against all instances and variations of the image known by the database 120.


In some embodiments, the hash value comparison at step 260 finds only exact matches in the hash values. These embodiments would misidentify minor modifications to an image as a new image altogether. However, in exchange for this shortcoming, the comparison at step 260 is greatly simplified. There would be no need to determine “hamming” distances, there would be a significantly reduced risk of false matches, and the comparison itself could be accomplished using a simple, binary search tree containing all known hash values in the database 120.
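As a brief illustration of this simplification (illustrative only): with no tolerance for near duplicates, a plain hash-keyed dictionary, or any ordered structure such as a binary search tree, replaces the Hamming-distance scan entirely.

```python
# Exact-match variant: a simple keyed lookup instead of a similarity scan.
known_fingerprints: dict[str, int] = {}   # fingerprint string -> item record id (illustrative)

def exact_match(fingerprint: str):
    # Returns the matching item record id, or None for a genuinely new image.
    return known_fingerprints.get(fingerprint)
```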


The creation of the new location entity 122 in step 270 means that this instance of the image will be automatically associated with the appropriate image item 124 the next time it is reported by the extension 144 (at step 215), thereby limiting the need to perform the computationally intensive task of creating the hash value at step 255 and doing the comparison of step 260. Once the new location entity 122 is created, the method 200 continues with step 225, with existing annotations and image data for the identified image being transmitted to the extension 144 by the server 110.


In an instance where the server 110 determines that the image 170 is unique (or, more accurately, is being identified to the server 110/database 120 for the first time because there was no match in step 260), the server 110 will report back to the extension 144 that no match was found. In some cases, the identification of a match in step 260 may not be instantaneous. In these cases, the server 110 may report back to the extension 144 that no match has been found yet. The extension 144 may maintain communication with the server, via a persistent connection such as web sockets (or via polling the server 110, push notifications, or any other means of continuous or repeating communications), to determine if a match is eventually found. If so, processing will continue at step 265. If the server 110 has completed the comparison with all item records 124 (and all location records 122 if they contain hash values), and determined that there is no match, the server 110 will create a new record 124 for the image in database 120 at step 275. This new record 124 will contain the hash/fingerprint value created at step 255 for this image. In addition, the image's URL location will be stored in a new database entity 122 that is associated with this image record 124 (step 280). Since there was not a pre-existing image record 124 in the database for this image, there could not be any existing data or annotations that could be shared with the extension for user consumption. As a result, steps 225 and 230 are skipped, and the method continues at step 235 with the receipt of new annotations from the extension 144.


Alternative Embodiments

In the alternative embodiment 300 shown in FIG. 3, annotations are created and presented for a media item 310 that can be uniquely identified through an identifier (ID) number so that it is not necessary to use hash algorithms (such as those applied in step 255) to identify multiple occurrences of this item 310. For instance, video stored on a common video server or service (such as the YouTube video service provided by Google Inc. of Mountain View, Calif.) is typically associated with a video identifier. Code 322 can be inserted into web pages 320, 330 that “embeds” the video 310 into the pages 320, 330 by merely identifying the video 310 through its identifier. The same video identifier can be used to embed the same video on hundreds of websites. Similarly, social media content (such as Tweets and Facebook posts) can be embedded based on a similar identifier that uniquely identifies the content.


Using embodiment 300, it is possible to store annotations to the media item 310 at the server 110. The server 110 again has a processor 112 and communicates with a database 120, as was the case in FIG. 1. In this case, however, the item record 310 does not contain a hash value for comparison purposes, but merely contains the identifier of the media item 310. The item record 310 again connects to a plurality of annotation database entities 126. The user computing devices 140 have a browser 142 and an extension 144 that monitors the actions of the browser 142 and communicates with the server 110 in order to provide annotations for the media items 310. When the extension 144 identifies a media item 310 (e.g., a video or social media post) that may be annotated, the identifier for that media item 310 is sent to the server 110, which then determines whether that identifier is found in any current item records 310. If so, annotations 126 for that media item 310 are provided to the user computing device 140. The extension 144 also gives the user the opportunity to create a new annotation to that media item 310. This annotation is communicated through the network 130 to the server 110, and then stored in the database 120 as a new annotation record 126. The method for providing this functionality is much the same as the method 200 described above, with the hash generation and comparison functions being replaced with the steps of transmitting the media item ID to the server for matching with the item record 310. In FIG. 3, no location database entities are shown in database 120. This is because it is not necessary to use network location to help identify media items 310, as the media item identifier provides a unique identification mechanism. It may, nonetheless, prove useful to track all known locations for the embedded media item, and to identify which location is associated with each provided annotation, as was described above.


In another embodiment, a match between an image identified by the extension 144 and the annotated item records 124 is made through a technique other than a hash on the entire image file. The hash algorithms are usually preferred, as they base the comparison on the entire image and are less likely to create false positive matches. However, other techniques currently exist for finding similar photographs, including histogram comparison (comparing the color lists and relative percentages of two images), template matching (searching for a specific sub-set of one image within another image), feature matching (identifying key points in an image such as peaks or curves at different locations and comparing those key points with key points of other images), contour matching (identifying image contours and comparing those contours to other known contours), object matching (machine learning that identifies objects in images and comparing the found-object locations of those images with found-object locations of those objects in other images), and facial recognition (using facial recognition and the locations of key facial features within the images to find similar images). Each of these techniques could be used in place of the hash algorithms described in connection with FIGS. 1 and 2. While at the current time these alternatives would appear to provide less precision than the hash values, this may change as these techniques improve over time with additional research and effort.
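As one concrete illustration of these alternatives, the sketch below performs a histogram comparison with OpenCV; the library choice and the use of the correlation metric are assumptions made for this example, not requirements of the described system:

```python
# Illustrative histogram comparison between two images using OpenCV.
import cv2

def histogram_similarity(path_a: str, path_b: str) -> float:
    def hist(path):
        img = cv2.imread(path)
        h = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                         [0, 256, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()
    # Correlation is 1.0 for identical color distributions, lower for dissimilar ones.
    return cv2.compareHist(hist(path_a), hist(path_b), cv2.HISTCMP_CORREL)
```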


In yet another embodiment, the images themselves are analyzed in order to determine the content of the images. For instance, known object recognition algorithms could be used to identify objects within the image. Pattern recognition and machine learning techniques can further identify image content. The intent of these algorithms is to identify objects or other content elements shown in the images. Once the content is identified, annotations and other elements can be associated with the content items. Annotations made on one content item found within a first image could then be shared with viewers of a different image that contains the same content. The data construct for creating this type of system 400 is shown in FIG. 4. As can be seen in that Figure, the database 120 contains object database entities 410 that contain information about the objects found in the images represented by item elements 124. The process for identifying these object entities 410 may be quite time intensive, and is probably best performed by the server 110 after a media item is first identified and placed into a new item database entity 124. In some embodiments, the server 110 itself will perform the object identification algorithms on its own processor 112, but it is equally likely that an external service provided over the network 130 will be better able to efficiently identify objects in a particular media item/image.


Regardless of which technique is used, the server 110 will subject the media item to object identification algorithm(s) when the item database entity 124 is first created. Objects that are identified will be compared to preexisting object database entities 410 in the database 120. A match will create a new link between the preexisting object entity 410 and the item entity 124. If no match is found for an identified object, a new object database entity 410 can be created in the database 120 and then linked to the item entity 124. As shown by the crow's-foot notation in FIG. 4, media items 124 and objects 410 are linked in a many-to-many relationship, which means that multiple objects might be found in a single image (or other media item), and a single object might be found in multiple images. When the user is given the opportunity to annotate a media item in step 235, they are also given the opportunity to annotate a particular object that has been identified in that media item. Such object annotations will be uploaded to the server and stored as database entities 420. These annotations are associated directly with an object entity 410, and therefore will be shared with anyone viewing an image (or other media item) that contains the same object (as indicated by the relationship between the relevant object entity 410 and the item entity 124).
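Extending the earlier illustrative schema, the object entities 410, the many-to-many item/object link, and the object annotations 420 might be represented as follows (the table names are again assumptions):

```python
# Illustrative extension of the earlier sqlite3 schema for objects 410 and object annotations 420.
conn.executescript("""
CREATE TABLE objects (                 -- object entities 410
    object_id   INTEGER PRIMARY KEY,
    label       TEXT NOT NULL          -- e.g. "Angkor Wat", "mosaic"
);
CREATE TABLE item_objects (            -- many-to-many link between items 124 and objects 410
    item_id     INTEGER NOT NULL REFERENCES items(item_id),
    object_id   INTEGER NOT NULL REFERENCES objects(object_id),
    PRIMARY KEY (item_id, object_id)
);
CREATE TABLE object_annotations (      -- object annotations 420
    annotation_id INTEGER PRIMARY KEY,
    object_id     INTEGER NOT NULL REFERENCES objects(object_id),
    body          TEXT NOT NULL
);
""")
```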


When the extension 144 of a user computing device 140 submits a new image to the server 110, the server 110 will be able to identify the image as new in a short time, but the object identification process may take longer. Thus the extension 144 may not be able to show any object annotations 420 immediately upon submission of a new image. But when the object identification process is complete, even a new image may contain existing objects that have already been the subject of an object annotation. These object annotations 420 can then be presented to the extension 144 for sharing with end users. Thus, a photograph of Angkor Wat in Cambodia found on a website may quickly result in relevant annotations even though the photograph and website were previously unknown to the system.


It is possible to implement the above embodiments without using an extension 144 or a custom application 146. To accomplish this, a server-side embeddable widget must be placed on a web page that incorporates and calls programming from a main provider site, much in the same way in which Google's Google Analytics service operates. Any page that includes this widget would be automatically enabled for enhanced viewing of the annotations 126, 420. Incorporating the functionality on the server side could increase the ability of the present invention to work on mobile devices, as mobile device browsers are less likely to work with extensions.


It is also possible to skip the location based comparison at step 215 in FIG. 2. Instead, viewed media content items would be compared to items in the database 120 using only the fingerprint/hash comparison of step 260. In this case, the hash value could even be created by the extension 144 on the user computing device 140 and then submitted directly to the server 110, which would reduce the workload of the server processor 112.


Finally, it is possible to develop an external interface to the database 120 that would allow direct access to and searching of the database 120. This interface would allow users to input search criteria relating to items, people, places, photos, and the like. This search criteria could then be compared with the items 124, objects 410, and annotations 126, 420 within database 120. The database 120 will then return any matching content found within the database (such as annotations 126, 420), as well as links to the locations 122 that contain the related content. This would allow, for instance, users to search for photographs of a particular individual. The annotations and metadata would be searched for that individual, and the URLs associated with matching annotations could be identified and shared with the searching user. Complex searches of images and other media types that would otherwise be impossible would become possible, all while using crowd-sourcing techniques to create the annotations that are used to search the media content.
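An illustrative sketch of such a search, reusing the assumed schema from the earlier examples, matches a query string against stored annotations and returns the network locations of the related media items:

```python
# Illustrative direct search over annotations, returning matching annotation text and item URLs.
def search_annotations(conn, query: str):
    return conn.execute(
        """SELECT a.body, l.url
           FROM annotations a
           JOIN locations l ON l.item_id = a.item_id
           WHERE a.body LIKE ?""",
        (f"%{query}%",),
    ).fetchall()
```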


The many features and advantages of the invention are apparent from the above description. Numerous modifications and variations will readily occur to those skilled in the art. Since such modifications are possible, the invention is not to be limited to the exact construction and operation illustrated and described. Other aspects of the disclosed invention are further described and expounded upon in the following pages.

Claims
  • 1. A method comprising: a) constructing a database having: i) item records each identifying a media item accessible on a network and containing a fingerprint value for the media item, ii) network location records associated with the item records, each network location record containing a location on the network where the media item identified in the associated item record is found; iii) annotation records associated with the item records, each annotation record containing an annotation concerning the media item identified in the associated item record; b) at a server computer, receiving from a first remote application a first network location for a first media item found on the network; c) at the server computer, comparing a first fingerprint value for the first media item against the fingerprint values in the item records to find a matching item record; d) at the server computer, creating a first network location record containing the first network location and associating the first network location record with the matching item record; e) at the server computer, receiving from the first remote application a first annotation for the first media item; and f) at the server computer, creating a first annotation record containing the first annotation and associating the first annotation record with the matching item record.
  • 2. The method of claim 1, wherein the first fingerprint value for the first media item is created by applying a hash algorithm to the first media item.
  • 3. The method of claim 2, wherein the server computer downloads a copy of the first media item from the first network location before the hash algorithm is applied to the copy of the first media item.
  • 4. The method of claim 3, wherein the server computer uses its own processor to apply the hash algorithm to the copy of the first media item.
  • 5. The method of claim 2, wherein the first fingerprint value for the first media item is stored in the first network location record.
  • 6. The method of claim 5, further comprising: g) at the server computer, receiving from a second remote application a second network location relating to an unknown media item; h) at the server computer, comparing a second fingerprint value for the unknown media item against the fingerprint values in the database to find that the unknown media item matches the matching item record for the first media item; i) at the server computer, creating a second network location record containing the second network location and associating the second network location record with the matching item record; and j) at the server computer, transmitting to the second remote application the first annotation record.
  • 7. The method of claim 6, wherein the first network location record contains the first fingerprint value, and further wherein the comparing step h) compares the second fingerprint value against the first fingerprint value in the first network location record.
  • 8. The method of claim 6, further comprising: k) at the server computer, receiving from the second remote application a second annotation for the first media item created when the first media item is viewed at the second network location; and l) at the server computer, creating a second annotation record containing the second annotation and associating the second annotation record with the matching item record.
  • 9. The method of claim 8, wherein the first annotation is textual, and the second annotation comprises audio commentary.
  • 10. The method of claim 9, further comprising receiving from a third remote application a request for media item annotation, and responding to the request by transmitting the first annotation and the second annotation.
  • 11. The method of claim 10, wherein the request for media item annotation includes a requested network location, the method further comprising comparing the requested network location against the network location records to find a match between the requested network location and one of the first network location and the second network location.
  • 12. The method of claim 2, wherein the first media item is an image.
  • 13. The method of claim 2, wherein fingerprint values are considered to match even when they are not identical.
  • 14. The method of claim 13, wherein matches are determined in part by hamming distances between hash values.
  • 15. The method of claim 6, further comprising: k) at the server computer, receiving a request to query the database from a direct-access interface, the query request containing a query string; l) at the server computer, searching the database to find instances of the query string in relevant records of the database, the relevant records chosen from the set of records comprising item records, network location records, and annotation records; m) at the server computer, using the relevant records to identify a network location to at least one relevant media item; and n) at the server computer, returning data from the relevant records and links to the at least one relevant media item.
  • 16. A server apparatus comprising: a) database communications to a database, the database having: i) item records each identifying a media item accessible on a network and containing a fingerprint value for the media item, ii) network location records associated with the item records, each network location record containing a location on the network where the media item identified in the associated item record is found; iii) annotation records associated with the item records, each annotation record containing an annotation concerning the media item identified in the associated item record; b) a computer processor operating under programmed control of programming instructions; c) a memory device containing the programming instructions; and d) the programming instructions on the memory device, operable by the processor to perform the following functions: i) receive from a first remote application a first network location for a first media item found on the network; ii) compare a first fingerprint value for the first media item against the fingerprint values in the item records to find a matching item record; iii) create a first network location record containing the first network location and associate the first network location record with the matching item record; iv) receive from the first remote application a first annotation for the first media item; and v) create a first annotation record containing the first annotation and associate the first annotation record with the matching item record.
RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/408,562, filed on Oct. 14, 2016, which is hereby incorporated by reference.

Provisional Applications (1)
Number Date Country
62408562 Oct 2016 US