The subject matter described herein relates to information technology, and, more particularly, to techniques for detecting the theft of images or other inappropriate or unauthorized image use.
There are a variety of instances of the unauthorized use of images. For example, to gather material for creation of so-called sock puppet accounts (on-line identities used for inappropriate purposes such as deception), scripts harvest images from real users' profiles. Other reasons for unauthorized use of images include cyber-bullying, impersonation, and the like. The unauthorized use of an image can cause harm to the innocent owner of the image.
Described herein are techniques for image theft detection. In one aspect, an exemplary method of monitoring for inappropriate use of a digital visual medium includes the steps of obtaining, at a social network server, a request from a first user to upload a digital visual medium; marking the digital visual medium; obtaining, at the social network server, a request from a second user to upload the digital visual medium; and, based on the marking, determining whether the request from the second user to upload the digital visual medium is inappropriate.
In another aspect, another exemplary method of monitoring for inappropriate use of a digital visual medium includes the steps of identifying a social network account as potentially illegitimate; responsive to the identifying, generating at least one facial recognition score for at least one image of a face in at least one digital visual medium associated with the potentially illegitimate social network account; comparing the at least one facial recognition score to a database of facial recognition scores for legitimate users of the social network; and initiating remedial action if the comparing indicates a potential illegitimate use of the digital visual medium associated with the potentially illegitimate social network account.
As used herein, “facilitating” an action includes performing the action, making the action easier, helping to carry the action out, or causing the action to be performed. Thus, by way of example and not limitation, instructions executing on one processor might facilitate an action carried out by instructions executing on a remote processor, by sending appropriate data or commands to cause or aid the action to be performed. For the avoidance of doubt, where an actor facilitates an action by other than performing the action, the action is nevertheless performed by some entity or combination of entities.
One or more embodiments described herein or elements thereof can be implemented in the form of an article of manufacture including a machine readable medium that contains one or more programs which when executed implement such step(s); that is to say, a computer program product including a tangible computer readable recordable storage medium (or multiple such media) with computer usable program code for performing the method steps indicated. Furthermore, one or more embodiments described herein or elements thereof can be implemented in the form of an apparatus, such as a social network server apparatus, including a memory and at least one processor that is coupled to the memory and operative to perform, or facilitate performance of, exemplary method steps. Yet further, in another aspect, one or more embodiments described herein or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) specialized hardware module(s), (ii) software module(s) stored in a tangible computer-readable recordable storage medium (or multiple such media) and implemented on a hardware processor, or (iii) a combination of (i) and (ii); any of (i)-(iii) implement the specific techniques set forth herein.
Embodiments described herein can provide substantial beneficial technical effects. For example, one or more embodiments may facilitate the identification of images that are likely instances of image theft and/or other unauthorized or inappropriate image use.
These and other features and advantages will become apparent from the following detailed description of illustrative embodiments, which is to be read in connection with the accompanying drawings.
The client devices 102a-n shown each include a computer-readable medium, such as a random access memory (RAM) 108 coupled to a processor 110. The processor 110 executes computer-executable program instructions stored in memory 108. Such processors may include, for example, microprocessors, ASICs, and state machines. Such processors include, and/or are in communication with, media (for example, computer-readable media) storing instructions that, when executed by the processor, cause the processor to perform the steps described herein. Computer-readable media are discussed further below in connection with FIG. 4.
Client devices 102a-n may also include a number of external or internal devices such as a mouse, a CD-ROM, DVD, a keyboard, a display, or other input or output devices. Examples of client devices 102a-n are personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones, pagers, digital tablets, laptop computers, Internet appliances, and other processor-based devices. In general, a client device 102a may be any type of processor-based platform that is connected to a network 106 and that interacts with one or more application programs. Client devices 102a-n may operate on any operating system capable of supporting a browser or browser-enabled application, such as the Microsoft® Windows® operating system (registered marks of Microsoft Corporation, Redmond, Wash., US), the Linux® operating system (registered mark of Linus Torvalds, Portland, Oreg. US), the iOS operating system used by Apple Inc. or the OS X® operating system (registered mark of Apple Inc., Cupertino, Calif. US). The client devices 102a-n shown include, for example, personal computers executing a browser application program such as the Google Chrome™ browser (mark of Google Inc., Mountain View, Calif., US), Microsoft Corporation's Internet Explorer® browser (registered mark of Microsoft Corporation, Redmond, Wash., US), Apple Inc.'s Safari® browser (registered mark of Apple Inc., Cupertino, Calif. US), or the Firefox® browser (registered mark of Mozilla Foundation, Mountain View, Calif. US).
Through the client devices 102a-n, users 112a-n can communicate over the network 106 with each other and with other systems and devices coupled to the network 106. As shown in FIG. 1, a server device 104 is also coupled to the network 106.
The server device 104 shown includes a server executing a social network engine application program, also known as a social network engine 120. The social network engine 120 allows users, such as user 112a, to interact with and participate in a social network. A social network can refer to a computer network connecting entities, such as people or organizations, by a set of social relationships, such as friendship, co-working, or information exchange. Of course, a social network can refer to a computer application or data connecting such entities by such social relationships. Non-limiting examples of social network services include the Google+® service (registered mark of Google Inc., Mountain View, Calif., US), the Facebook® service (registered mark of Facebook, Inc. Palo Alto, Calif., US), and the Orkut® service (registered mark of Google Inc., Mountain View, Calif., US).
Social networks can include any of a variety of suitable arrangements. An entity or member of a social network can have a profile and that profile can represent the member in the social network. The social network can facilitate interaction between member profiles and allow associations or relationships between member profiles. Associations between member profiles can be one or more of a variety of types, such as friend, co-worker, family member, business associate, common-interest association, and common-geography association. Associations can also include intermediary relationships, such as friend of a friend, and degree of separation relationships, such as three degrees away.
Associations between member profiles can be reciprocal associations. For example, a first member can invite another member to become associated with the first member and the other member can accept or reject the invitation. A member can also categorize or weigh the association with other member profiles, such as, for example, by assigning a level to the association. For example, for a friendship-type association, the member can assign a level, such as acquaintance, friend, good friend, and best friend, to the associations between the member's profile and other member profiles. In one embodiment, the social network engine 120 can determine the type of association between member profiles, including, in some embodiments, the degree of separation of the association and the corresponding weight or level of the association.
Similar to the client devices 102a-n, the server device 104 shown includes a processor 116 coupled to a computer-readable memory 118. The server device 104 is in communication with a social network database 130. Server device 104, depicted as a single computer system, may be implemented as a network of computer processors. Examples of a server device 104 are servers, mainframe computers, networked computers, a processor-based device, and similar types of systems and devices. Client processor 110 and the server processor 116 can be any of a number of computer processors, such as processors from Intel Corporation of Santa Clara, Calif., US and Advanced Micro Devices, Inc. (AMD) of Sunnyvale, Calif., US.
Memory 118 contains a social network engine application program, also known as a social network engine 120. The social network engine 120 facilitates members, such as user 112a, interacting with and participating in a social network. A social network can include profiles that can be associated with other profiles. Each profile may represent a member and a member can be, for example, a person, an organization, a business, a corporation, a community, a fictitious person, or other entity. Each profile can contain entries, and each entry can include information associated with a profile. Examples of entries for a person profile can include information regarding relationship status, birth date, age, children, sense of humor, fashion preferences, pets, hometown location, passions, sports, activities, favorite books, music, TV, or movie preferences, favorite cuisines, email addresses, location information, IM name, phone number, address, skills, career, or any other information describing, identifying, or otherwise associated with a profile. Entries for a business profile can include market sector, customer base, location, supplier information, net profits, net worth, number of employees, stock performance, or other types of information associated with the business profile. Additionally, entries within a profile can include associations with other profiles. Associations between profiles within a social network can include, for example, friendships, business relationships, acquaintances, community associations, activity partner associations, common interest associations, common characteristic associations, or any other suitable type of association between profiles.
A social network can also include communities. Communities within a social network can represent groups of members sharing common interests or characteristics. Communities can include sub-communities, and multiple communities can be arranged into global communities. Sub-communities can include groups of profiles within a larger community that share common interests or characteristics independent from the entire community. For example, a “basketball players” community can include members who enjoy playing basketball from all around the world. A sub-community within the basketball community can include members specific to a local geographical area. Thus, users in California can form a “California basketball players” community. Global communities can include groups of communities sharing similar characteristics. For example, the “basketball players” community and a “basketball watchers” community can be grouped under a global “basketball” community.
Server device 104 also provides access to storage elements, such as a social network storage element, shown in the example of FIG. 1 as social network database 130.
It should be noted that some embodiments may include systems having a different architecture than that which is shown in FIG. 1.
For example, edge 220 and edge 222 each include an association between Profile A at vertex 202 and Profile D at vertex 208. The edge 220 represents a business association, and the edge 222 represents a friendship association. Profile A is also associated with Profile E by a common characteristic association including edge 218. The association between Profile A and Profile E may be more attenuated than the association between Profiles A and D, but the association can still be represented by the social network depicted in FIG. 2.
Each member represented by the Profiles A-F including the vertices 202-212, for purposes of illustration, is a person. Other types of members can be in social network 200. The associations 218-234 illustrated in FIG. 2 are reciprocal associations between member profiles.
Other embodiments may include directed associations or other types of associations. Directed associations associate a first profile with a second profile while not requiring the second profile to be associated with the first profile. For example, in a directed chart, Profile A can be associated by a friendship association with Profile B, and Profile B can be unassociated by a friendship connection with Profile A. Thus, a display of Profile A's friends would include Profile B, but a display of Profile B's friends would not include Profile A.
Within a social network, a degree of separation can be determined for associated profiles. One method of determining a degree of separation is to determine the fewest number of edges of a certain type separating the associated profiles. This method of determining a degree of separation can produce a type-specific degree of separation. A type-specific degree of separation is a degree of separation determined based on one particular type of association. For example, Profile A has a friendship degree of separation of two from Profile E. The fewest number of friendship associations separating Profile A and Profile E is two: the friendship association including edge 222 and the friendship association including edge 234. Thus, for the associated Profiles A and E, the degree of friendship separation, determined according to one aspect of one embodiment disclosed herein, is two.
Another type-specific degree of separation can also be determined for Profiles A and E. For example, a common characteristic degree of separation can be determined by determining the fewest number of common characteristic associations separating Profile A and Profile E. According to the embodiment depicted in FIG. 2, the fewest number of common characteristic associations separating Profile A and Profile E is one, namely, the common characteristic association including edge 218; thus, the common characteristic degree of separation between Profiles A and E is one.
According to other aspects of some embodiments, the degree of separation may be determined by use of a weighting factor assigned to each association. For example, close friendships can be weighted higher than more distant friendships. According to certain aspects of some embodiments using a weighting factor, a higher weighting factor for an association can reduce the degree of separation between profiles and lower weighting factors can increase the degree of separation. This can be accomplished, for example, by establishing an inverse relationship between each association and a corresponding weighting factor prior to summing the associations. Thus, highly weighted associations contribute less to the resulting sum than lower weighted associations.
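By way of illustration only, and not limitation, the following sketch (in Python, chosen merely as a convenient notation) shows one possible way to compute a type-specific, weighted degree of separation over a profile graph such as that of FIG. 2; the graph contents, function names, and weights are illustrative assumptions rather than part of any particular embodiment.

import heapq
from collections import defaultdict

# Hypothetical profile graph: profile -> list of (neighbor, association_type, weight).
# A higher weight represents a closer relationship (e.g., "best friend" > "acquaintance").
GRAPH = defaultdict(list)

def add_association(a, b, assoc_type, weight=1.0):
    """Add a reciprocal association between profiles a and b."""
    GRAPH[a].append((b, assoc_type, weight))
    GRAPH[b].append((a, assoc_type, weight))

def degree_of_separation(start, goal, assoc_type):
    """Type-specific, weighted degree of separation.

    Each traversed association contributes 1/weight to the total, so highly
    weighted (closer) associations contribute less to the resulting sum,
    consistent with the inverse relationship described above.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, a_type, weight in GRAPH[node]:
            if a_type != assoc_type:
                continue  # only count associations of the requested type
            nd = d + 1.0 / weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")  # no path of this association type

# Example mirroring FIG. 2: A-D and D-E friendships give Profile A a friendship
# degree of separation of two from Profile E (with unit weights).
add_association("A", "D", "friendship")
add_association("D", "E", "friendship")
add_association("A", "E", "common_characteristic")
print(degree_of_separation("A", "E", "friendship"))             # 2.0
print(degree_of_separation("A", "E", "common_characteristic"))  # 1.0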
Some embodiments enable the identification of images that are likely instances of image theft and/or unauthorized reuse.
In some embodiments, detection and determination are performed using a combination of watermarking, fingerprinting, and/or logging to track the spread of images across a social network.
For example, suppose a malicious user X steals a photo from user Y in order to create a so-called sock puppet (on-line identity used for inappropriate purposes such as deception). In some embodiments, user Y uploads the photo, and an operator of a social networking site modifies the uploaded photo to include EXIF meta data or the like regarding the location and use of this photo. X downloads this photo and attempts to use it in a profile of his or her own. The operator of the social networking site detects the EXIF meta data or the like and determines the use is inappropriate. Optionally, the use is then denied and/or user Y is advised of the inappropriate actions by user X.
In some embodiments, a false positive can be handled, for example, as follows. Consider the case where user W takes a photo from his or her friend, user Z, in order to share it on user W's account (for example, without using built-in re-sharing options). Using the same approach as in the previous paragraph, the EXIF data can be used to detect the reuse of the photo; however, other connections, such as social graph distance (as explained, for example, in connection with FIG. 2), can be taken into account to determine that the reuse is not inappropriate.
Refer again to FIG. 1, and consider also the exemplary flow chart of FIG. 3. User Y accesses social network engine 120 from his or her client (e.g., 102a) over network 106 and uploads a photo, as per step 304. The photo is marked in step 308, as described below, and stored in social network database 130 in step 310, where it can be made visible to other users.
Once the photo is modified and made visible to others, user X accesses social network engine 120 from his or her client 102b over network 106. User X may view the photo on, for example, user Y's social network page. User X downloads the photo from social network database 130 by interacting with social network engine 120 on server device 104 via client 102b over network 106. User X then attempts to use the photo in his or her own profile or on a false profile created by user X. In doing this, user X attempts to upload the photo from client 102b to social network engine 120 over network 106, as per step 304. Engine 120 receives the purported upload of the photo from user X. In some embodiments, before storing the photo in social network database 130 in step 310, social network engine 120 verifies the photo for legitimacy as in decision blocks 306 and 314. In other embodiments, the photo may first be stored in social network database 130 but is not made visible to others until it is verified. Here, social network engine 120 in step 306 (“YES” branch) detects the modification made when user Y uploaded the photo and determines in decision block 314 (“NO” branch) that use by user X is not appropriate. Storage in social network database 130 may be denied, or if already stored, the photo is not made visible to others and may be removed. Other actions may be taken. Remedial action in general is seen at step 316. Other examples include notifying user Y of the attempted inappropriate use and/or admonishing user X or taking other appropriate action against him or her.
On the other hand, if the photo is marked but the use is acceptable, as per the “YES” branch of decision block 314, the photo may be stored, or if already stored, made available for appropriate use.
Processing continues at step 312.
A variety of techniques can be used in decision block 314 to determine whether use is allowed. In some cases, use of a tagged photo by anyone but the original uploader could be disallowed, unless the photo is shared via an approved built-in re-sharing feature. In some cases, use of a tagged photo by anyone but the original uploader could be disallowed, unless the original uploader gives consent after being contacted by e-mail or the like once the attempted use of the marked photo is detected. For these purposes, verification module 504, discussed elsewhere herein, can include logic to compare an identifier of an original uploader, associated with a digital visual medium, against an identifier of a subsequent putative uploader of the medium. In some cases, as discussed above, other connections such as social graph distance (as explained, for example, in connection with FIG. 2) can be taken into account in determining whether a given use is appropriate.
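A minimal sketch of the kind of logic decision block 314 might apply, under the criteria just described, is given below; the function and parameter names, the graph-distance threshold, and the consent flag are hypothetical conveniences introduced purely for illustration.

def is_use_appropriate(original_uploader_id,
                       putative_uploader_id,
                       shared_via_builtin_resharing=False,
                       original_uploader_consented=False,
                       social_graph_distance=None,
                       max_trusted_distance=1):
    """Illustrative analog of decision block 314.

    Returns True if the attempted use of a marked digital visual medium
    appears appropriate, based on the criteria discussed above.
    """
    # The original uploader may always re-use his or her own medium.
    if putative_uploader_id == original_uploader_id:
        return True
    # Re-sharing via an approved built-in feature is allowed.
    if shared_via_builtin_resharing:
        return True
    # Explicit consent from the original uploader (e.g., via e-mail) is allowed.
    if original_uploader_consented:
        return True
    # A close social-graph connection (e.g., a direct friend) may be treated
    # as a legitimate re-share rather than theft, reducing false positives.
    if social_graph_distance is not None and social_graph_distance <= max_trusted_distance:
        return True
    return False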
A variety of techniques can also be used to carry out watermarking, fingerprinting, and/or logging in step 308, and subsequent detection thereof in step 306. Some embodiments, as noted, use EXIF meta data. Exchangeable image file format (EXIF) is a standard that specifies the formats for images, sound, and ancillary tags used by digital cameras (including smartphones), scanners and other systems handling image and sound files recorded by digital cameras. The specification uses certain existing file formats with the addition of specific metadata tags: JPEG DCT (discrete cosine transform) for compressed image files, and TIFF Rev. 6.0 (RGB color model or YCbCr color space) for uncompressed image files. The metadata tags defined in the EXIF standard include date and time information; camera settings; a thumbnail for previewing the picture on the camera's LCD screen, in file managers, or in photo manipulation software; descriptions; geotags (geographical location where the photo was taken); and/or copyright information. In some cases, the uploaded image already has EXIF tags and marking step 308 includes merely noting the tags in a database; in other cases, the uploaded image does not have EXIF tags and the same are added in step 308; in still other cases, the uploaded image already has EXIF tags but the same are not suitable for marking the photo for purposes of carrying out step 314, and thus additional EXIF tags are added in step 308. Meta data tags can, in general, be stored in the picture files. Tags can be detected, for example, with a parser.
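For instance, using the Pillow imaging library (one possible tool among many, and not required by any embodiment), EXIF-style marking along the lines of step 308, and detection along the lines of decision block 306, might be sketched as follows; the particular tag and marker format are assumptions.

from PIL import Image

IMAGE_DESCRIPTION = 0x010E  # standard EXIF/TIFF ImageDescription tag

def mark_photo(src_path, dst_path, uploader_id):
    """Step 308 (sketch): write an ownership marker into the photo's EXIF data."""
    img = Image.open(src_path)
    exif = img.getexif()
    exif[IMAGE_DESCRIPTION] = f"social-network-owner:{uploader_id}"
    img.save(dst_path, exif=exif)

def detect_marker(path):
    """Decision block 306 (sketch): parse the EXIF data and return any marker found."""
    exif = Image.open(path).getexif()
    description = exif.get(IMAGE_DESCRIPTION, "")
    if isinstance(description, bytes):
        description = description.decode("utf-8", errors="ignore")
    if str(description).startswith("social-network-owner:"):
        return str(description).split(":", 1)[1]  # identifier of the original uploader
    return None

In practice, as noted above, the marker could equally be recorded in a database rather than written into the file itself.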
Use of EXIF meta data tags is but one non-limiting example of a suitable marking process. Digital watermarking is the process of embedding information into a digital signal, such as video or pictures, which information may be used to verify its authenticity or the identity of its owners, in the same manner as paper bearing a watermark for visible identification. Digital watermarking is used in one or more embodiments. In one or more embodiments, digital watermarks are stored in the images themselves and are detected using pattern recognition techniques.
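As one deliberately simple, non-limiting illustration of digital watermarking, the following sketch embeds a short identifier into the least-significant bits of an image's pixels and later extracts it; practical systems would typically use more robust and imperceptible schemes, and all names and formats here are illustrative only.

from PIL import Image

def embed_watermark(src_path, dst_path, watermark):
    """Embed `watermark` (plus a 16-bit length prefix) in the blue channel's LSBs."""
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    payload = watermark.encode("utf-8")
    bits = [(len(payload) >> i) & 1 for i in range(15, -1, -1)]            # length prefix
    bits += [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = []
    for idx, (r, g, b) in enumerate(pixels):
        if idx < len(bits):
            b = (b & ~1) | bits[idx]   # overwrite the blue LSB with one payload bit
        out.append((r, g, b))
    marked = Image.new("RGB", img.size)
    marked.putdata(out)
    marked.save(dst_path, "PNG")       # a lossless format preserves the LSBs

def extract_watermark(path):
    """Recover the embedded identifier via pattern recognition over the LSBs."""
    img = Image.open(path).convert("RGB")
    lsbs = [b & 1 for (_, _, b) in img.getdata()]
    length = int("".join(map(str, lsbs[:16])), 2)
    byte_bits = lsbs[16:16 + 8 * length]
    data = bytes(int("".join(map(str, byte_bits[i:i + 8])), 2)
                 for i in range(0, len(byte_bits), 8))
    return data.decode("utf-8", errors="ignore")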
Digital fingerprinting involves extracting several unique features of a digital video or still picture, which can be stored as a fingerprint of the video or still content. The evaluation and identification of video or still content is then performed by comparing the extracted fingerprints. The creation of a digital fingerprint involves the use of software that decodes the video or still data and then applies several feature extraction algorithms. Video or still fingerprints are highly compressed when compared to the original source file and can therefore be easily stored in databases for later comparison. Digital fingerprinting is used in one or more embodiments. In one or more embodiments, digital fingerprints are stored and detected in a manner similar to digital watermarks, but with a longitudinal component. A longitudinal component refers to an analysis that includes a time axis. For example, data is tracked regarding where the photo is being used (e.g., a geographic location such as New York or New Jersey) over time. If there is a sudden use of the photo in a geographically diverse location (say, Hong Kong), this suggests some kind of anomalous use (by way of example and not limitation, a use in a geographically distant location within a time frame in which it is physically impossible to travel between the locations might be particularly indicative of a potential inappropriate use).
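The following sketch illustrates one very simple perceptual fingerprint (an average hash) together with a Hamming-distance comparison; production fingerprinting systems apply far more sophisticated feature extraction, and the threshold shown is an arbitrary illustrative value.

from PIL import Image

def average_hash(path, hash_size=8):
    """Compute a 64-bit perceptual fingerprint: downscale, convert to grayscale,
    then set one bit per pixel that is brighter than the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    return bin(h1 ^ h2).count("1")

def is_same_content(path_a, path_b, threshold=10):
    """Two images whose fingerprints differ in only a few bits are likely the
    same underlying photo, even after re-compression or resizing."""
    return hamming_distance(average_hash(path_a), average_hash(path_b)) <= threshold

Because such a fingerprint tends to survive re-encoding and resizing, it can also serve as the key for the longitudinal log discussed below.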
Logging can be carried out in one or more embodiments. A log is kept for each detected instance of a digital image, and analysis (e.g., time series or longitudinal) is used to determine the spreading of the image. A log of each account where a particular image occurs may be kept and the spreading of the image may be tracked. Longitudinal analysis is discussed above. A time series analysis could detect, for example, a sudden spike in usage. Logging includes maintaining a database of where a particular image is located in the social network.
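One purely illustrative way to maintain such a log and flag anomalous spread is sketched below; the spike window, the account-count threshold, the speed limit, and the haversine-based distance check are assumptions rather than features of any particular embodiment.

import math
import time
from collections import defaultdict

# fingerprint -> list of (timestamp, account_id, (lat, lon)) sightings
SIGHTINGS = defaultdict(list)

def log_sighting(fingerprint, account_id, lat, lon, timestamp=None):
    SIGHTINGS[fingerprint].append((timestamp or time.time(), account_id, (lat, lon)))

def _km(p1, p2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def anomalies(fingerprint, spike_window=3600, spike_count=5, max_kmh=1000.0):
    """Return human-readable flags describing suspicious spread of one image."""
    flags = []
    log = sorted(SIGHTINGS[fingerprint])
    # Time-series check: many distinct accounts using the image within one window.
    for i, (t, _, _) in enumerate(log):
        recent = [e for e in log[i:] if e[0] - t <= spike_window]
        if len({acct for _, acct, _ in recent}) >= spike_count:
            flags.append("sudden spike in usage")
            break
    # Longitudinal/geographic check: an implausibly fast jump between locations.
    for (t1, _, p1), (t2, _, p2) in zip(log, log[1:]):
        hours = max((t2 - t1) / 3600.0, 1e-6)
        if _km(p1, p2) / hours > max_kmh:
            flags.append("geographically implausible reuse")
            break
    return flags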
Given the discussion thus far, it will be appreciated that, in general terms, an exemplary method of monitoring for inappropriate use of a digital visual medium includes the step 304 of obtaining, at a social network server 104, a request from a first user (e.g., 112a) to upload a digital visual medium. A digital visual medium is expressly defined herein to include digital photographs and/or digital videos. This step can be carried out, for example, with user interface 502. A further step 308 includes marking the digital visual medium for identification thereof. This step can be carried out, for example, with digital visual medium verification module 504. A still further step (repetition of step 304 by another user) includes obtaining, at the social network server, a request from a second user (e.g., 112b) to upload the digital visual medium. This step can be carried out, for example, with user interface 502. An even further step (e.g., decision blocks 306 and 314) includes, based on the marking, determining whether the request from the second user to upload the digital visual medium is inappropriate. This step can be carried out, for example, with digital visual medium verification module 504.
Marking the digital visual medium for identification thereof can be implemented in a number of different fashions. In one or more instances, the marking facilitates associating the digital visual medium with a legitimate user, such as the first user mentioned above.
In some instances, the marking includes tagging with EXIF meta data; for example, using tagger 508. In such cases, the determining can include detecting the EXIF meta data with a parser 506.
In some instances, the marking includes digital watermarking; for example, with module 510. In such cases, the determining can include detecting the digital watermarking via pattern recognition (for example, with module 512).
In some instances, the marking includes digital fingerprinting; for example, with module 510. In such cases, the determining can include detecting the digital fingerprinting via pattern recognition with a longitudinal component (for example, with module 512).
In some instances, the marking includes storing, in a database (e.g., database 130), in association with the digital visual medium, a first facial recognition score; for example, with module 510. The score can be generated, for example, with pattern recognition module 512 or another component of engine 120. In such cases, the determining can include calculating a second facial recognition score on the digital visual medium sought to be uploaded by the second user (e.g., with pattern recognition module 512 or another component of engine 120); and comparing the first and second facial recognition scores for a match, with pattern recognition module 512 or another component of engine 120.
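A sketch of the first-score/second-score comparison described above follows; the embeddings, the Euclidean-distance measure, and the threshold are stand-ins for whatever facial model and matching criterion are actually used.

import math

# Hypothetical store: legitimate uploader id -> facial recognition "score"
# (represented here as an embedding vector produced by some facial model).
FACE_SCORES = {}

def record_first_score(uploader_id, embedding):
    """First upload (marking): store the first facial recognition score."""
    FACE_SCORES[uploader_id] = tuple(embedding)

def find_matching_uploader(second_embedding, threshold=0.6):
    """Later upload: compare the second score against the stored first scores."""
    best_id, best_dist = None, float("inf")
    for uploader_id, stored in FACE_SCORES.items():
        d = math.dist(stored, second_embedding)   # Euclidean distance
        if d < best_dist:
            best_id, best_dist = uploader_id, d
    return best_id if best_dist < threshold else None

# Example with toy 3-dimensional "embeddings":
record_first_score("user_112a", (0.10, 0.42, 0.77))
print(find_matching_uploader((0.12, 0.40, 0.80)))  # "user_112a"
print(find_matching_uploader((0.90, 0.05, 0.10)))  # None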
Facial recognition can be used in conjunction with any of the other techniques (e.g., tagging, watermarking, fingerprinting, or logging) or as an alternative. Facial models are, in and of themselves, known to the skilled artisan. Any suitable traditional or three-dimensional technique can be used (e.g., geometric, photometric, Principal Component Analysis using eigenfaces, Linear Discriminant Analysis, Elastic Bunch Graph Matching using the Fisherface algorithm, the Hidden Markov model, and/or neuronally motivated dynamic link matching).
Traditional techniques typically identify faces by extracting landmarks, or features, from an image of the subject's face. For example, a given technique may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. The extracted features are then used to search for other images with matching features. In some cases, significant points of a face, such as the tip of the nose, the eyes, the tips of the ears, and the corners of the mouth, can be reconstructed. The distances between these points typically remain almost identical across photographs of the same person, even if the person changes facial expression, moves, and so on.
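A toy illustration of the landmark-distance idea appears below; the landmark names, coordinates, and tolerance are invented solely for purposes of the example.

import itertools
import math

def landmark_signature(landmarks):
    """Pairwise distances between landmarks, normalized by the inter-eye
    distance so the signature is insensitive to image scale."""
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    names = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b]) / scale
            for a, b in itertools.combinations(names, 2)]

def same_person(landmarks_a, landmarks_b, tolerance=0.08):
    """Two faces match if their normalized distance signatures are close."""
    sig_a, sig_b = landmark_signature(landmarks_a), landmark_signature(landmarks_b)
    return all(abs(x - y) <= tolerance for x, y in zip(sig_a, sig_b))

face1 = {"left_eye": (100, 120), "right_eye": (160, 120),
         "nose_tip": (130, 160), "mouth_left": (112, 195), "mouth_right": (148, 195)}
face2 = {k: (x * 2 + 5, y * 2 + 5) for k, (x, y) in face1.items()}  # scaled, shifted copy
print(same_person(face1, face2))  # True: the distances scale together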
The facial model can be used to see whether an attempt is being made to “re-upload” a photo or video; that is, it can be determined whether the photo is of a person for whom there is already a photo in a database such as database 130. In some instances, the photo or video for which the upload or other use is attempted need not be the exact same photo or video already in database 130 or the like. In some instances, facial recognition is used to quickly narrow down the selection of photos or videos to be reviewed, so that only photos or videos related to the individual in question are examined.
In some embodiments, a further step 316 includes taking remedial action if the determining step indicates that the request from the second user to upload the digital visual medium is not appropriate. This step can be carried out, for example, with digital visual medium verification module 504.
In some embodiments, a further step (e.g., “NO” branch of 314) includes refraining from storing the digital visual medium in response to the request from the second user, if the determining step indicates that the request from the second user to upload the digital visual medium is not appropriate. This step can be carried out, for example, with digital visual medium verification module 504 including, in some cases, social network database interface 514.
In some embodiments, further steps include inaccessibly storing the digital visual medium in response to the request from the second user (e.g., storing in database 130 using interface 514 after step 304 for the second user), and refraining from making the digital visual medium available in response to the request from the second user, if the determining step indicates that the request from the second user to upload the digital visual medium is not appropriate (e.g., “NO” branch of 314; interface 514 will not permit access to the stored digital visual medium).
In another aspect, an exemplary method of monitoring for inappropriate use of a digital visual medium includes, as shown at 304, obtaining, at a social network server 104, a request from a first user (e.g., 112a) to upload a digital visual medium. This step can be carried out, for example, with user interface 502. A further step (one non-limiting detailed form of 308) includes tagging the digital visual medium with EXIF meta data (e.g., using tagger 508). A still further step (repetition of step 304 by another user) includes obtaining, at the social network server, a request from a second user (e.g., 112b) to upload the digital visual medium. This step can be carried out, for example, with user interface 502. Additional steps include, responsive to the request from the second user, detecting the EXIF meta data with a parser 506; and, based on the EXIF meta data detected by the parser, determining (e.g., step 314) whether the request from the second user to upload the digital visual medium is appropriate.
Referring to flow chart 600 of FIG. 6, another exemplary method of monitoring for inappropriate use of a digital visual medium includes the step 604 of identifying a social network account as potentially illegitimate.
A further step 606 includes, responsive to the identifying step 604, generating at least one facial recognition score for at least one image of a face in at least one digital visual medium associated with the potentially illegitimate social network account (for example, using module 512). Still further steps include, at step 608, comparing the at least one facial recognition score to a database of facial recognition scores for legitimate users of the social network; and, at step 616, initiating remedial action if the comparing indicates a potential illegitimate use of the digital visual medium associated with the potentially illegitimate social network account. Furthermore, in this regard, remedial action could be of any kind discussed elsewhere herein, and could be initiated in any case of a match; however, in many cases, as seen at step 614, an analysis can be conducted to see if the use of the matching photo or video is appropriate (e.g., using any of the criteria discussed with regard to step 314 in FIG. 3).
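Under the same illustrative assumptions as the earlier facial-score sketch (hypothetical embeddings, thresholds, and helper names), steps 606 through 616 might be sketched compactly as follows.

import math

def investigate_account(account_id, account_face_scores, legitimate_scores,
                        is_use_appropriate, threshold=0.6):
    """Sketch of steps 606-616: compare facial scores from a potentially
    illegitimate account against a database of scores for legitimate users,
    and initiate remedial action where the use does not appear appropriate."""
    actions = []
    for score in account_face_scores:                         # scores from step 606
        for legit_user, stored in legitimate_scores.items():  # comparison, step 608
            if math.dist(score, stored) >= threshold:
                continue                                       # no facial match
            if is_use_appropriate(legit_user, account_id):     # analysis, step 614
                continue                                       # e.g., a legitimate re-share
            actions.append(f"remedial action (step 616): notify {legit_user}; "
                           f"review or restrict account {account_id}")
    return actions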
Steps 608-616 could be carried out, for example, with verification module 504.
Processing continues at step 612.
Thus, one or more embodiments include investigating suspicious accounts, carrying out facial analysis on photos or videos, and looking for users that match these facial recognition models; then further investigating to see if these users are having their identities stolen. An operator of a social networking site determines that a certain account may be suspicious. Based on that, facial recognition is carried out on images of people in photos uploaded to that account and these are compared to a database of known legitimate users of the service. If it appears as if there is illegitimate use, appropriate action is taken, such as: denying posting of the photos or videos, or advising the legitimate subject of the photo or video, or sending a warning to the person attempting to establish the sock puppet account, or the like.
Exemplary System and Article of Manufacture Details
One or more embodiments can employ hardware aspects or a combination of hardware and software aspects. Software includes but is not limited to firmware, resident software, microcode, etc. One or more embodiments or elements thereof can be implemented in the form of an article of manufacture including a machine readable medium that contains one or more programs which when executed implement such step(s); that is to say, a computer program product including a tangible computer readable recordable storage medium (or multiple such media) with computer usable program code configured to implement the method steps indicated, when run on one or more processors. Furthermore, one or more embodiments or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform, or facilitate performance of, exemplary method steps.
Yet further, in another aspect, one or more embodiments or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) specialized hardware module(s), (ii) software module(s) executing on one or more general purpose or specialized hardware processors, or (iii) a combination of (i) and (ii); any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a tangible computer-readable recordable storage medium (or multiple such media). Appropriate interconnections via bus, network, and the like can also be included.
In one or more embodiments, memory 108, 118, 430 configures a processor 110, 116, 420 to implement one or more methods, steps, and functions (collectively referred to as a process). The memory 108, 118, 430 could be distributed or local and the processor 110, 116, 420 could be distributed or singular. Different steps could be carried out by different processors.
As is known in the art, part or all of one or more aspects of the methods and apparatus discussed herein may be distributed as an article of manufacture that itself includes a tangible computer readable recordable storage medium having computer readable code means embodied thereon. The computer readable program code means is operable, in conjunction with a computer system (including, for example, system 102, 104, 400), to carry out all or some of the steps to perform the methods or create the apparatuses discussed herein. A computer readable medium may, in general, be a recordable medium (e.g., floppy disks, hard drives, compact disks, EEPROMs, or memory cards) or may be a transmission medium (e.g., a network including fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store information suitable for use with a computer system may be used. The computer-readable code means is any mechanism for allowing a computer to read instructions and data, such as magnetic variations on a magnetic media or height variations on the surface of a compact disk. The medium can be distributed on multiple physical devices (or over multiple networks). As used herein, a tangible computer-readable recordable storage medium is defined to encompass a recordable medium, examples of which are set forth above, but is not defined to encompass a transmission medium or disembodied signal.
The computer systems and servers and other pertinent elements described herein each typically contain a memory that will configure associated processors to implement the methods, steps, and functions disclosed herein. The memories could be distributed or local and the processors could be distributed or singular. The memories could be implemented as an electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term “memory” should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by an associated processor. With this definition, information on a network is still within a memory because the associated processor can retrieve the information from the network.
Accordingly, it will be appreciated that one or more embodiments can include a computer program including computer program code means adapted to perform one or all of the steps of any methods or claims set forth herein when such program is run, for example, on system 102, 104, 400, and the like, and that such program may be embodied on a tangible computer readable recordable storage medium. As used herein, including the claims, a “server” includes a physical data processing system running a server program. It will be understood that such a physical server may or may not include a display, keyboard, or other input/output components. Furthermore, it should be noted that any of the methods described herein can include an additional step of providing a system including distinct software modules embodied on one or more tangible computer readable storage media. All the modules (or any subset thereof) can be on the same medium, or each can be on a different medium, for example. The modules can include any or all of the components shown in the figures (e.g. social networking engine 120, digital visual medium verification module 504; and social network database interface module 514). Social networking engine 120 can optionally include tagger 508, parser 506, digital fingerprinting and/or watermarking module 510, and pattern recognition module 512. In other instances, some or all of these components could be separate from digital visual medium verification module 504. The method steps can then be carried out using the distinct software modules of the system, as described above, executing on one or more hardware processors. Further, a computer program product can include a tangible computer-readable recordable storage medium with code adapted to be executed to carry out one or more method steps described herein, including the provision of the system with the distinct software modules. In one or more embodiments, memories 108, 118, 430 include tangible computer-readable recordable storage media as well as (volatile) memory on or accessible to the processor; code on one or more tangible computer-readable recordable storage media is loaded into the volatile memory and configures the processors to implement the techniques described herein.
Accordingly, it will be appreciated that one or more embodiments can include a computer program including computer program code means adapted to perform one or all of the steps of any methods or claims set forth herein when such program is implemented on a processor, and that such program may be embodied, in a non-transitory manner, on a tangible computer readable recordable storage medium. Further, one or more embodiments can include a processor including code adapted to cause the processor to carry out one or more steps of methods or claims set forth herein, together with one or more apparatus elements or features as depicted and described herein.
Although illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that those precise embodiments are non-limiting, and that various other changes and modifications may be made by one skilled in the art.