System and method for controlling content upload on a network

Information

  • Patent Grant
  • Patent Number
    11,693,928
  • Date Filed
    Monday, January 13, 2020
  • Date Issued
    Tuesday, July 4, 2023
Abstract
A system and method for protecting copyright in content distributed online, in combination with specified business rules. A portion of content presented for upload on a network is analyzed to detect an image associated with a content owner; the image is compared with reference images to identify the content owner; and business rules are applied to control unauthorized uploading of the content. The identifier may be a logo included in the content as a digital graphic, or a non-visual marker. Analysis is advantageously performed on a sample of video frames or a segment of preselected length. If the content is found to be copyrighted, and the attempted upload is unauthorized, uploading may or may not be permitted, and the user may or may not be charged a fee for subsequent access to the content.
Description
FIELD OF THE DISCLOSURE

This disclosure relates to determining the source of audio or video content available on a network (e.g. the Internet), and using that information to enforce copyright protection and/or business rules for that content.


BACKGROUND OF THE DISCLOSURE

Many people upload copyrighted content to websites without authorization. Websites generally build costly safeguards into their infrastructure to prevent (or minimize) copyright infringement.


SUMMARY OF THE DISCLOSURE

The present disclosure provides a system and method for protecting copyright in content distributed online, in combination with specified business rules. In accordance with an aspect of the disclosure, this is done by analyzing a portion of content presented for upload on a network to detect an image associated with a content owner; comparing a detected image with a set of reference images to identify the owner of the content; and applying business rules to control unauthorized uploading of the content. This image may be a logo of the content owner included in the content as a digital online graphic. Alternatively, the image may be a human face appearing in a video, with the analysis including an automated face recognition procedure. In a case where the content comprises a video, the analysis is advantageously performed on a sample of video frames, or on a segment having a preselected length.


The comparison between the detected image and the reference images may include determining a degree of coincidence between the detected image and a reference image; if the degree of coincidence meets a predetermined criterion, a requirement for additional analysis of the image (e.g. human inspection) may be reported.


The application of business rules may include comparing an identifier of a user presenting the content with a set of authorized user identifiers associated with the content owner; permitting uploading of the content if the user is determined to be authorized; and disposing of the content if the user is determined to be unauthorized. If the user is unauthorized, uploading by that user may still be permitted with the user being charged a fee for subsequent access to the uploaded content.


In accordance with another aspect of the disclosure, a system includes a server configured to implement a method with the above-described features.


The system and method disclosed herein provide a simple, effective way to identify content from content owners who embed explicit visual cues or non-visual markers, so that downstream receivers of the content can use content analysis techniques to determine the origin of the content and then implement appropriate business rules.


The foregoing has outlined, rather broadly, the preferred features of the present disclosure so that those skilled in the art may better understand the detailed description of the disclosure that follows. Additional features of the disclosure will be described hereinafter that form the subject of the claims of the disclosure. Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiment as a basis for designing or modifying other structures for carrying out the same purposes of the present disclosure and that such other structures do not depart from the spirit and scope of the disclosure in its broadest form.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a system in which a user may download copyrighted content having a logo.



FIG. 2 schematically illustrates a system including a logo detection and business rules engine, in accordance with an embodiment of the disclosure.



FIGS. 3A and 3B illustrate logos in the form of a digital graphic and human face, respectively.



FIG. 4 is a flowchart showing steps in an automated method for detecting copyrighted content, in accordance with an embodiment of the disclosure.



FIG. 5 is a flowchart showing steps in an automated method for detecting a logo and applying relevant business rules, in accordance with an embodiment of the disclosure.



FIG. 6 is a flowchart illustrating an automated business process for enforcing a content owner's copyright, in accordance with another aspect of the disclosure.





DETAILED DESCRIPTION

An embodiment will be described below in which video content includes a digital on-line graphic, which serves as a logo for the content owner. It will be appreciated, however, that professionally produced content typically has numerous features (both visual and aural) which may serve as effective identifiers for the content owner, and therefore function as a logo for the owner.



FIG. 1 schematically illustrates downloading of video content 1 to a user device 2 (typically a personal computer). The user of the device sends a request for image content via the Internet 10. The image source 11 may be a publisher or distributor of movies, TV shows, photographs or the like. The image source 11 retrieves the image from a storage device 12 and makes it available for download by the user. The content includes a logo 3 (typically a semi-transparent shape in the lower right corner of the display). The content 1 is typically protected by copyright, so that the user is authorized to view the content but not retain a copy of it. However, in the absence of suitable detection software for copyrighted content, the user is still able to upload the content to a user space 17, accessible via a network 15 maintained by an online application provider such as Yahoo!®, and store the content in storage 18. Typically the content is sent by the user via the Internet to an ingest server 16 of the network 15. It will be appreciated that the image source 11 and storage 12 may themselves be part of network 15.



FIG. 2 schematically illustrates an embodiment of the disclosure where the network 15 includes a logo detection and rules engine 20 for processing incoming content before that content is accepted by ingest server 16. Although engine 20 is shown separate from ingest server 16, it will be appreciated that the two may operate on the same server hardware. A library of logos is maintained in a storage device 21, for comparison with incoming video content.


Since most instances of theft of copyrighted content involve premium entertainment content, a wide range of content may be protected by comparison against a relatively small set of logos. When the logo is a digital graphic or “bug,” the task of finding it is simplified by its predictable placement in a corner 24 of a video frame, as illustrated in FIG. 3A. Alternatively, logo detection may involve recognition of a human face 25 appearing in the frame (FIG. 3B).
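The patent does not specify an implementation for locating the corner region; as an illustrative sketch only (all names and the 20% fraction are hypothetical choices, not part of the disclosure), the predictable lower-right "bug" placement means the detector can restrict its search to a small bounding box of each frame:

```python
def corner_box(width: int, height: int, frac: float = 0.2):
    """Bounding box (left, top, right, bottom) of the lower-right
    corner region of a frame, where a broadcast 'bug' logo is
    conventionally placed. `frac` is the fraction of each dimension
    to retain."""
    left = int(width * (1 - frac))
    top = int(height * (1 - frac))
    return left, top, width, height
```

Restricting the comparison to this box is what keeps the search tractable against even a large logo library.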



FIG. 4 illustrates steps in a method for detecting a logo using engine 20. The incoming content is fed to the network and staged (step 31). The incoming content then undergoes decompression and transcoding (step 32), e.g. conversion to flash video. The decompressed video, or a portion of it, is then analyzed to detect a logo (step 33). Content analysis techniques such as image recognition may be used to detect the presence of a logo, which may or may not be visible to a human viewer of the content. It is generally not necessary to analyze every frame of the video; the video may be sampled to yield a predetermined number of frames, or a segment of a preselected length may be broken out. For example, it is convenient to analyze a segment about 2½ minutes in length, to overlap the longest expected television commercial break, and thus capture at least some portion of a copyrighted program.
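The sampling strategy of step 33 can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and the default of 30 samples are hypothetical, while the 150-second segment length reflects the roughly 2½-minute span described above:

```python
def sample_frames(total_frames: int, num_samples: int = 30):
    """Pick evenly spaced frame indices rather than analyzing
    every frame of the video."""
    step = max(1, total_frames // num_samples)
    return list(range(0, total_frames, step))[:num_samples]

def segment_bounds(fps: float, start_s: float = 0.0, length_s: float = 150.0):
    """Frame-index bounds of a segment of preselected length;
    150 s overlaps the longest expected commercial break."""
    return int(start_s * fps), int((start_s + length_s) * fps)
```

Either approach bounds the analysis cost per upload regardless of the video's total length.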


The selected frames or segments are then compared with known logos in predictable spots in each frame (step 34). If a logo is detected in the incoming video, the engine applies business rules (step 35) to determine whether the video content is to be uploaded, discarded, returned to the sender, or uploaded with fees charged to the sender, as discussed in more detail below.



FIG. 5 is a flowchart illustrating additional details of the logo detection and comparison process, according to an embodiment. The video content is received, decompressed and analyzed using content analysis techniques (steps 401, 402). The video content, or a sample thereof, may be analyzed for an on-screen graphic or for some other identifier. Content analysis techniques may include face recognition, voice recognition, OCR, or detection of auxiliary information available in the content (e.g. digital cue-tones indicating broadcast advertising insertion points, or closed caption text recognition). While a logo is often a visible on-screen graphic marking the image (or video frames), automated content analysis techniques are equally effective for logos not visible to the viewer.


Once a logo is detected, it is compared with the sample logos previously provided by the content providers (artists, publishers, distributors, etc.) and stored in storage device 21 (steps 403, 404). If the logo is clearly identified, the identifying information for the video content is input to a business rules engine for further action (step 405). In an embodiment, the logo may not precisely coincide with one of the sample logos, but may coincide to some predetermined degree; that is, a “fuzzy” match with a known logo at, for example, 90% coincidence. If a “fuzzy” match is found, the rules engine may issue a report alerting a human reviewer to the appearance of the logo. The content may then be subjected to other processes, including off-line review (step 409), to determine a more precise match with a known logo.
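The three-way outcome described above (clear identification, "fuzzy" match routed to human review, or no match) can be sketched as threshold logic. This is an assumption-laden illustration, not the disclosed implementation: the similarity scores are presumed to be produced by some upstream image-comparison step, and the 0.98 "exact" cutoff is a hypothetical choice, while 0.90 mirrors the example coincidence level in the text:

```python
def classify_match(similarities, exact=0.98, fuzzy=0.90):
    """Given {owner_name: similarity_score} for each reference logo,
    return (owner, action) for the best-scoring candidate.
    Scores at or above `exact` identify the owner outright; scores in
    the 'fuzzy' band are flagged for off-line human review."""
    if not similarities:
        return None, "upload"
    owner, score = max(similarities.items(), key=lambda kv: kv[1])
    if score >= exact:
        return owner, "apply_rules"
    if score >= fuzzy:
        return owner, "human_review"
    return None, "upload"
```

The fuzzy band keeps false positives from silently blocking legitimate uploads while still surfacing near-matches for review.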


If no logo is recognized, uploading of the content is permitted (step 408). If the content includes a known logo, the business rules engine determines the disposition of the content (step 406). For example, each known logo may have a list of approved users (aggregators, affiliates, or simply “approved uploaders”) associated therewith. If the user attempting to upload the content is on the approved list (step 407, 410), then uploading is permitted. Otherwise, the rules engine determines that the content is protected, and the user submitting the content is unauthorized (step 411).
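The approved-uploader comparison of steps 407, 410 and 411 is described only in prose; a minimal sketch (all identifiers hypothetical) is a set-membership test keyed by the identified owner:

```python
def disposition(owner, uploader_id, approved_uploaders):
    """Compare the uploader's identifier against the content owner's
    list of approved users (aggregators, affiliates, etc.).
    `approved_uploaders` maps owner name -> set of approved IDs."""
    if uploader_id in approved_uploaders.get(owner, set()):
        return "permit"        # steps 407/410: approved uploader
    return "unauthorized"      # step 411: apply the owner's rules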


The business rules engine may establish several possible ways to dispose of content submitted for upload by an unauthorized user. FIG. 6 is a flowchart schematically illustrating alternative methods of disposing of content. The business rules that are applied (step 501) reflect previously established policies of the particular content owner. At the content owner's direction, the unauthorized upload may be prevented (step 502) or may be permitted (step 510) with conditions imposed on the user. When the upload of the content is prevented, the content may be simply deleted (step 503) or returned to the user (step 504). The engine may also take further action to mark the event (e.g. make an entry in a file, send a message to the content owner, etc.).


Alternatively, the content owner may choose to permit the user to upload its content, in order to derive revenue therefrom (step 510). In an embodiment, the rules engine marks the uploaded content (step 511) and keeps a record of subsequent access of the content by the user. The administrator of network 15 may then charge a fee each time the content is played, thereby providing revenue for the content owner (step 512). In this instance, the rules engine may attach attribution information to the content before it is uploaded.
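The record-keeping of steps 511 and 512 can be sketched as a per-upload ledger. This is an illustrative assumption about bookkeeping only (the class name, fields, and fee values are hypothetical, not from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class MonetizedUpload:
    """Track an unauthorized-but-permitted upload so the network
    administrator can charge a per-play fee on behalf of the
    content owner (step 512)."""
    owner: str
    uploader_id: str
    fee_per_play: float
    plays: int = 0

    def record_play(self):
        """Record one playback of the uploaded content; return
        the fee incurred for this play."""
        self.plays += 1
        return self.fee_per_play

    def owed(self):
        """Total fees accrued by the uploader so far."""
        return self.plays * self.fee_per_play
```

Marking the content at upload time (step 511) is what makes every later playback attributable to this record.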


The rules applied to unauthorized users need not be the same for every content owner, or every item of content. For example, one owner may choose to block all attempted uploads of its content, while another owner may choose to permit uploads of preselected items; a user attempting to upload a popular, recently released movie may be charged a higher fee than for an older movie.
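Because the rules vary by owner and by item, a natural sketch is a per-owner policy table. The structure below is purely illustrative (owner names, item classes, and fee amounts are all hypothetical), but it captures the two policies just described: one owner blocking everything, another monetizing with a higher fee for newer releases:

```python
OWNER_POLICIES = {
    "studio_a": {"default": "block"},
    "studio_b": {"default": "monetize",
                 "fees": {"new_release": 1.99, "catalog": 0.49}},
}

def fee_for(owner, item_class):
    """Return the per-play fee for an unauthorized upload, or None
    if the owner's policy is to block the upload entirely.
    Unknown owners are blocked by default."""
    policy = OWNER_POLICIES.get(owner, {"default": "block"})
    if policy["default"] != "monetize":
        return None
    return policy["fees"].get(item_class, policy["fees"]["catalog"])
```

Keeping policy in data rather than code lets each content owner revise its rules without changes to the engine itself.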


It will be appreciated that the system and method disclosed herein may be used to protect both visual and non-visual (e.g. aural or tonal) copyrighted content. In particular, a tonal logo may be used to identify the owner of video or audio content.


While the disclosure has been described in terms of specific embodiments, it is evident in view of the foregoing description that numerous alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the disclosure is intended to encompass all such alternatives, modifications and variations which fall within the scope and spirit of the disclosure and the following claims.

Claims
  • 1. A method comprising: receiving, at a content server, a request from a second user to upload a digital content item to a website, the digital content item associated with a first user and comprising digital content, said request comprising an identifier of said second user and a compressed version of the digital content item; converting, by the content server, the compressed version of the digital content item into the digital content item by decompressing the compressed version to obtain the digital content item; digitally sampling, by the content server, the digital content item to identify a segment of the content item; analyzing, by the content server, the identified segment of the digital content item using an automated content analysis technique, and based on said analysis, detecting a portion of the content that references said first user; analyzing, via the content server, said portion, and based on said analysis, determining that said digital content item is associated with the first user; comparing, by the content server, upon determination that said digital content item is associated with the first user, said identifier of the second user with a set of identifiers corresponding to authorized users for uploading content to said website, the set of identifiers associated with the first user; determining, by the content server based on said comparison, whether the second user is authorized to upload said digital content item to said website; and communicating, over a network, an upload instruction to a device of said second user based on said determination.
  • 2. The method of claim 1, wherein said upload instruction facilitates said upload to said website, wherein said determination indicates that said second user is a permitted uploader.
  • 3. The method of claim 1, wherein said upload instruction restricts said upload to said website, wherein said determination indicates that said second user is not a permitted uploader.
  • 4. The method of claim 1, wherein said set of identifiers of authorized users is set by said first user.
  • 5. The method of claim 1, further comprising: determining a condition for said upload based on said comparison.
  • 6. The method of claim 5, wherein said condition indicates that said digital content item is to be deleted when said second user is not a permitted uploader.
  • 7. The method of claim 5, wherein said condition causes the digital content item to be returned to said first user when said second user is not a permitted uploader.
  • 8. The method of claim 5, wherein said condition is marked as an event when said second user is a permitted uploader, such that the upload causes a message to be sent to the first user alerting the first user of said upload.
  • 9. The method of claim 1, wherein said detected portion comprises a logo associated with the first user, wherein said determination that the first user is an approved provider of said digital content item is based on analysis of said logo compared to logos of approved content providers.
  • 10. A non-transitory computer-readable storage medium tangibly encoded with computer-executable instructions, that when executed by a content server, perform a method comprising: receiving, at the content server, a request from a second user to upload a digital content item to a website, the digital content item associated with a first user and comprising digital content, said request comprising an identifier of said second user and a compressed version of the digital content item; converting, by the content server, the compressed version of the digital content item into the digital content item by decompressing the compressed version to obtain the digital content item; digitally sampling, by the content server, the digital content item to identify a segment of the content item; analyzing, by the content server, the identified segment of the digital content item using an automated content analysis technique, and based on said analysis, detecting a portion of the content that references said first user; analyzing, via the content server, said portion, and based on said analysis, determining that said digital content item is associated with the first user; comparing, by the content server, upon determination that said digital content item is associated with the first user, said identifier of the second user with a set of identifiers corresponding to authorized users for uploading content to said website, the set of identifiers associated with the first user; determining, by the content server based on said comparison, whether the second user is authorized to upload said digital content item to said website; and communicating, over a network, an upload instruction to a device of said second user based on said determination.
  • 11. The non-transitory computer-readable storage medium of claim 10, wherein said upload instruction facilitates said upload to said website, wherein said determination indicates that said second user is a permitted uploader.
  • 12. The non-transitory computer-readable storage medium of claim 10, wherein said upload instruction restricts said upload to said website, wherein said determination indicates that said second user is not a permitted uploader.
  • 13. The non-transitory computer-readable storage medium of claim 10, wherein said set of identifiers of authorized users is set by said first user.
  • 14. The non-transitory computer-readable storage medium of claim 10, further comprising: determining a condition for said upload based on said comparison.
  • 15. The non-transitory computer-readable storage medium of claim 14, wherein said condition indicates that said digital content item is to be deleted when said second user is not a permitted uploader.
  • 16. The non-transitory computer-readable storage medium of claim 14, wherein said condition causes the digital content item to be returned to said first user when said second user is not a permitted uploader.
  • 17. The non-transitory computer-readable storage medium of claim 14, wherein said condition is marked as an event when said second user is a permitted uploader, such that the upload causes a message to be sent to the first user alerting the first user of said upload.
  • 18. The non-transitory computer-readable storage medium of claim 10, wherein said detected portion comprises a logo associated with the first user, wherein said determination that the first user is an approved provider of said digital content item is based on analysis of said logo compared to logos of approved content providers.
  • 19. A content server comprising: a processor; and a non-transitory computer-readable storage medium for tangibly storing thereon program logic for execution by the processor, the program logic comprising: logic executed by the processor for receiving, at the content server, a request from a second user to upload a digital content item to a website, the digital content item associated with a first user and comprising digital content, said request comprising an identifier of said second user and a compressed version of the digital content item; logic executed by the processor for converting, by the content server, the compressed version of the digital content item into the digital content item by decompressing the compressed version to obtain the digital content item; logic executed by the processor for digitally sampling, by the content server, the digital content item to identify a segment of the content item; logic executed by the processor for analyzing, by the content server, the identified segment of the digital content item using an automated content analysis technique, and based on said analysis, detecting a portion of the content that references said first user; logic executed by the processor for analyzing, via the content server, said portion, and based on said analysis, determining that said digital content item is associated with the first user; logic executed by the processor for comparing, by the content server, upon determination that said digital content item is associated with the first user, said identifier of the second user with a set of identifiers corresponding to authorized users for uploading content to said website, the set of identifiers associated with the first user; logic executed by the processor for determining, by the content server based on said comparison, whether the second user is authorized to upload said digital content item to said website; and logic executed by the processor for communicating, over a network, an upload instruction to a device of said second user based on said determination.
CROSS REFERENCE TO RELATED APPLICATION

This application is a continuation of, and claims priority from co-pending U.S. patent application Ser. No. 12/024,572, filed Feb. 1, 2008, which is incorporated herein by reference.

Related Publications (1)
Number Date Country
20200151486 A1 May 2020 US
Continuations (1)
Number Date Country
Parent 12024572 Feb 2008 US
Child 16740808 US