The present invention relates to a fingerprinting method and an apparatus and computer program arranged to carry out the fingerprinting method.
Digital watermarking of content is very well known. The content may comprise any type of information, and may include one or more of audio data, image data, video data, textual data, multimedia data, a web page, software products, security keys, experimental data or any other kind of data. There are many methods for performing digital watermarking of content but, in general, they all involve adding a watermark to an item of content. This involves embedding, or adding, watermark symbols (or a watermark codeword or payload data) into the original item of content to form a watermarked item of content. The watermarked item of content can then be distributed to one or more users (or recipients or receivers).
The method used for adding a watermark codeword to an item of content depends on the intended purpose of the watermark codeword. Some watermarking techniques are designed to be “robust”, in the sense that the embedded watermark codeword can be successfully decoded even if the watermarked item of content has undergone subsequent processing (be that malicious or otherwise). Some watermarking techniques are designed to be “fragile”, in the sense that the embedded watermark codeword cannot be successfully decoded if the watermarked item of content has undergone subsequent processing or modification. Some watermarking techniques are designed such that the difference between the original item of content and the watermarked item of content is substantially imperceptible to a human user (e.g. the original item of content and the watermarked item of content are visually and/or audibly indistinguishable to a human user). Other criteria for how a watermark is added to an item of content exist.
Digital forensic watermarking is increasingly being used to trace users who have “leaked” their content in an unauthorized manner (such as an unauthorized online distribution or publication of content). For this type of watermarking process, watermark codewords specific to each legitimate/authorized receiver are used. Each of the receivers receives a copy of the original item of content with their respective watermark codeword embedded therein. Then, if an unauthorized copy of the item of content is located, the watermark codeword can be decoded from that item of content and the receiver that corresponds to the decoded watermark codeword can be identified as the source of the leak.
However, even if we assume that the watermarking scheme itself is secure (i.e. the method by which the watermark codewords are embedded in the item of content and subsequently decoded is secure), there is still a powerful attack available against any digital forensic watermarking scheme: the so-called “collusion attack”. In this type of attack, a number of users, each of whom has his own watermarked version of the item of content, form a coalition. As the watermarked versions are individually watermarked, and thus different, the coalition can spot the differences that arise from the individual watermarks in their collection of watermarked items of content. Thus, the coalition can create a forged copy of the item of content by combining bits and pieces from the various watermarked versions that they have access to. Typical examples are averaging these versions, or interleaving pieces from the different versions.
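The interleaving variant of a collusion attack can be sketched as follows. This is a hypothetical symbol-level model, not any particular embodiment: under the usual “marking assumption”, the coalition can only vary positions where its copies differ, and at each position it outputs a symbol taken from one of its copies.

```python
import random

def interleave_attack(versions):
    # At each position the coalition outputs a symbol seen in one of its
    # copies; where all copies agree it has no choice but to keep that symbol.
    return [random.choice([v[i] for v in versions])
            for i in range(len(versions[0]))]

# Three colluders holding binary fingerprint segments (illustrative values):
coalition = [[0, 1, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1]]
forgery = interleave_attack(coalition)
```

Note that position 0, where all three copies agree, survives unchanged in the forgery; this is exactly the information a tracing scheme exploits.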
Watermarking schemes alone cannot counter a collusion attack. Instead, the best way to withstand collusion attacks is by carefully selecting the sequences of watermark symbols that are used to form the watermark codewords that are then actually embedded and distributed. Such constructions are known in the literature as “traitor tracing schemes” or “fingerprinting schemes”, and the watermark codewords are known as “fingerprint-codes” or sometimes simply “fingerprints”. An important feature of such a scheme is the length of its fingerprint-codes, contrasted against the number of colluding users it can catch.
Various classes of traitor tracing schemes exist in the literature. One classification of traitor tracing schemes distinguishes between so-called “static” traitor tracing schemes and so-called “dynamic” traitor tracing schemes. For static traitor tracing schemes, it is assumed that the initial distributor of the watermarked items of content generates a single fingerprint-code for each receiver and distributes these to the receivers (as a watermark embedded within the item of content). Then, when the unauthorized copy (the “forgery”) is found, a decoding/tracing algorithm is executed on that forgery to determine which receivers colluded to produce the forgery. This then ends the process. Static traitor tracing schemes are suitable for a single/one-off distribution of items of content (e.g. a single movie) to multiple receivers. In contrast, in a dynamic traitor tracing scheme, the distributor generates a fingerprint-code for each active/connected receiver and distributes the fingerprint-codes to these receivers (as a watermark embedded within an item of content). Then, when an unauthorized copy (the “forgery”) is found, a decoding/tracing algorithm is executed on that forgery to try to identify one or more of the colluding receivers. If a member of the coalition is detected, then that receiver is deactivated/disconnected (in the sense that the receiver will receive no further watermarked items of content). Then, further fingerprint-codes are distributed to the remaining active/connected receivers (as a new watermark embedded within a new/subsequent item of content). The process continues in this way until all colluding receivers have been identified and disconnected. This may be viewed as operating over a series of rounds/stages, or at a series of time points, whereby at each stage the distributor will have more information on which to base his detection of colluding receivers and possibly eliminate one or more of those colluding receivers from subsequent rounds.
This is suitable for scenarios in which a series of items of content are to be distributed to the population of receivers.
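The round-based dynamic loop described above can be sketched as follows. All function and variable names here are illustrative placeholders, not taken from any embodiment: `decode_round` stands in for the watermark decoding of the forgery in a given round, and `update` for the per-receiver score rule.

```python
def run_rounds(fingerprints, decode_round, threshold, update):
    """Dynamic tracing sketch: each round, compare the symbol recovered from
    the forgery against each active receiver's fingerprint symbol and
    disconnect any receiver whose score crosses the accusation threshold."""
    active = set(fingerprints)
    scores = {r: 0.0 for r in fingerprints}
    num_rounds = len(next(iter(fingerprints.values())))
    for i in range(num_rounds):
        y = decode_round(i)          # suspect symbol observed this round, or None
        if y is None:
            continue
        for r in list(active):
            scores[r] = update(scores[r], y, fingerprints[r][i])
            if scores[r] > threshold:
                active.discard(r)    # disconnect: no further content for r
    return scores, active
```

With a simple ±1 update rule, a receiver whose code matches the forgery round after round is quickly disconnected while innocent receivers drift downward.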
Another classification of traitor tracing schemes distinguishes between so-called “probabilistic” traitor tracing schemes and so-called “deterministic” traitor tracing schemes. A traitor tracing scheme is deterministic if, when the associated tracing algorithm identifies a receiver as being part of the coalition of receivers, then there is absolute certainty that that receiver's watermarked item of content was used to form the forgery. In contrast, a traitor tracing scheme is probabilistic if, when the associated tracing algorithm identifies a receiver as being part of the coalition of receivers, there is a non-zero probability (a so-called false positive probability) that that receiver's watermarked item of content was not actually used to form the forgery, i.e. that the receiver was not part of the coalition. A deterministic traitor tracing scheme will therefore never accuse any innocent receivers of helping to generate the forgery; a probabilistic traitor tracing scheme may accuse an innocent receiver of helping to generate the forgery, but this would happen with a small false positive probability.
A problem with deterministic traitor tracing schemes is that the size of the alphabet that is required is large—i.e. when generating the fingerprint-code for a receiver, each symbol in the fingerprint-code must be selectable from an alphabet made up of a large number of symbols. In general, watermarking schemes are more robust against forgery (e.g. by averaging different versions) if the alphabet size is small. It would therefore be desirable to have a fingerprinting scheme that makes use of a small (preferably binary) alphabet.
Current probabilistic static traitor tracing schemes can operate with a binary alphabet. However, current static traitor tracing schemes are only guaranteed to identify one of the colluding users, but not necessarily more or all of them. A solution is to iterate the scheme to identify all the colluding users, but this requires lengthy fingerprint-codes. It would be desirable to have a fingerprinting scheme that is guaranteed (at least with a certain probability) to identify all of the users who form the coalition generating forgeries and that has short fingerprint-codes.
Furthermore, current static traitor tracing schemes assume that the number of receivers who form the coalition is known beforehand (or at least that an upper bound can be placed on this number). When using static traitor tracing codes, this means that (in retrospect) an unnecessarily long codeword has often been used. For example, if only two people actually colluded but the codeword was tailored to catch ten colluders, then the codewords and tracing time could be about 25 times longer than was actually necessary in that case.
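The quoted factor of about 25 follows from the quadratic growth of code length with the assumed coalition size in Tardos-style constructions (a sketch, assuming length l ∝ c²):

```python
def length_ratio(c_assumed, c_actual):
    # With l proportional to c^2, provisioning for c_assumed colluders when
    # only c_actual actually collude wastes this factor in length and time.
    return (c_assumed / c_actual) ** 2
```

Here `length_ratio(10, 2)` gives 25, matching the example above.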
Thus, current traitor tracing schemes are unable to trace any number of colluding receivers (i.e. a number that is unspecified in advance), with a small (i.e. practical) number of required watermarking symbols (i.e. a small alphabet), in a relatively short time with relatively short codewords, whilst ensuring that all the colluding receivers can be identified.
According to a first aspect of the invention, there is provided a fingerprinting method comprising, for each round in a series of rounds: providing to each receiver in a set of receivers a version of a source item of content, the source item of content corresponding to the round, wherein for the round there is a corresponding part of a fingerprint-code for the receiver, the part comprising one or more symbols, wherein the version provided to the receiver represents those one or more symbols; obtaining, from a suspect item of content, one or more corresponding symbols as a corresponding part of a suspect-code; for each receiver in the set of receivers, updating a corresponding score that indicates a likelihood that the receiver is a colluding-receiver, wherein a colluding-receiver is a receiver that has been provided with a version of a source item of content that has been used to generate a suspect item of content, wherein said updating is based on the fingerprint-code for the receiver and the suspect-code; for each receiver in the set of receivers, if the score for the receiver exceeds a threshold, updating the set of receivers by removing the receiver from the set of receivers so that the receiver is not provided with a further version of a source item of content, wherein the threshold is set such that the probability that a receiver that is not a colluding-receiver has a score exceeding the threshold is at most a predetermined probability.
In essence, this involves obtaining a dynamic (or at least semi-dynamic) probabilistic fingerprinting scheme by adapting static probabilistic fingerprinting schemes to the dynamic setting. This provides improved performance over existing dynamic probabilistic fingerprinting schemes, in terms of having a reduced number of required watermarking symbols (i.e. a small alphabet) whilst making use of shorter codewords. Moreover, embodiments are able to ensure that all the colluding receivers can be identified. As the method operates over a series of rounds and may identify colluding receivers at each round, the method may be terminated early, in that the method may be stopped once a particular number (or all) of the colluding receivers have been identified, i.e. the full length of (static) fingerprint-codes does not always need to be used and provided to receivers. In other words, colluding receivers may be detected earlier than otherwise possible, making use of fewer fingerprint symbols.
In some embodiments, each symbol assumes a symbol value from a predetermined set of symbol values, and the i-th symbol of the fingerprint-code for a receiver is generated as an independent random variable such that, for each symbol value in the predetermined set of symbol values, the probability that the i-th symbol of the fingerprint-code for a receiver assumes that symbol value is a corresponding probability value set for the i-th symbol position of the fingerprint-codes for the receivers.
If an obtained symbol corresponds to the i-th symbol position in the fingerprint-codes then updating the score for a receiver may comprise incrementing the score if that obtained symbol matches the i-th symbol in the fingerprint-code for that receiver and decrementing the score if that obtained symbol does not match the i-th symbol in the fingerprint-code for that receiver.
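A minimal sketch of this match/mismatch tally, with binary symbols assumed purely for illustration:

```python
def tally_score(fingerprint, suspect_code):
    # +1 for each position where the symbol obtained from the suspect-code
    # matches the receiver's fingerprint symbol, -1 where it does not.
    return sum(1 if x == y else -1 for x, y in zip(fingerprint, suspect_code))
```

For example, three matches and one mismatch over four positions yield a net score of 2.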
In some embodiments, each symbol assumes a symbol value from a predetermined set of symbol values, the predetermined set comprising only two symbol values. In particular, a binary symbol alphabet may be used. This is particularly useful as watermarking schemes are able to better handle situations in which only binary symbols (e.g. a 1 or a 0) need to be embedded: the watermarking may be made more robust and less noticeable.
In some embodiments, the probability that the i-th symbol of a fingerprint-code for a receiver assumes a first symbol value is pi and the probability that the i-th symbol of a fingerprint-code for a receiver assumes a second symbol value is 1−pi, and if an obtained symbol corresponds to the i-th symbol position in the fingerprint-codes then updating the score for a receiver may comprise incrementing the score by √((1−pi)/pi) if that obtained symbol is the first symbol value and the i-th symbol in the fingerprint-code for that receiver is the first symbol value and decrementing the score by √(pi/(1−pi)) if that obtained symbol is the first symbol value and the i-th symbol in the fingerprint-code for that receiver is the second symbol value. Additionally, if an obtained symbol corresponds to the i-th symbol position in the fingerprint-codes then updating the score for a receiver may comprise incrementing the score by √(pi/(1−pi)) if that obtained symbol is the second symbol value and the i-th symbol in the fingerprint-code for that receiver is the second symbol value and decrementing the score by √((1−pi)/pi) if that obtained symbol is the second symbol value and the i-th symbol in the fingerprint-code for that receiver is the first symbol value.
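The four-case weighted update above can be written directly as code. This is a sketch, not a definitive implementation; the first symbol value is written as 1 and the second as 0.

```python
from math import sqrt

def weighted_update(score, y, x, p):
    """Score update: y is the symbol obtained from the suspect-code at
    position i, x the receiver's fingerprint symbol, p the bias p_i with
    which x was generated. Matches push the score up, mismatches down,
    with rare symbols weighted more heavily."""
    if y == 1:
        return score + (sqrt((1 - p) / p) if x == 1 else -sqrt(p / (1 - p)))
    return score + (sqrt(p / (1 - p)) if x == 0 else -sqrt((1 - p) / p))
```

With p = 0.5 all four weights reduce to 1, recovering the simple ±1 tally; a match on an unlikely symbol (small p, y = x = 1) contributes a much larger increment.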
In some embodiments, the probability that the i-th symbol of a fingerprint-code for a receiver assumes a first symbol value is pi and the probability that the i-th symbol of a fingerprint-code for a receiver assumes a second symbol value is 1−pi, wherein the value pi is generated as an independent random variable having a probability density function of:

f(pi) = 1/((π−4δ′)√(pi(1−pi))) for pi∈[δ, 1−δ],

wherein δ′=arcsin(√δ) such that 0<δ′<π/4, δ=1/(δcc), c is an expected number of colluding-receivers, and δc is a predetermined constant.
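Assuming the density in question is the standard cutoff arcsine form, proportional to 1/√(p(1−p)) on [δ, 1−δ] (which is consistent with the definition δ′ = arcsin(√δ) above), pi can be sampled via the substitution p = sin²(r) with r uniform on [δ′, π/2−δ′]. A sketch, with `d_c` standing in for the predetermined constant δc:

```python
import random
from math import asin, sin, sqrt, pi

def sample_bias(c, d_c=300.0):
    # delta = 1/(d_c * c); d_c = 300 matches the TAR1 choice delta = 1/(300c).
    delta = 1.0 / (d_c * c)
    d_prime = asin(sqrt(delta))                # delta' = arcsin(sqrt(delta))
    r = random.uniform(d_prime, pi / 2 - d_prime)
    return sin(r) ** 2                         # p_i lies in [delta, 1 - delta]
```

The substitution guarantees every sampled bias stays inside the cutoff range, so neither symbol value ever becomes vanishingly rare.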
In some embodiments, each symbol for each fingerprint-code is generated independent of an expected number of colluding-receivers. This means that the fingerprinting scheme does not need to know in advance an estimate on the number of colluding receivers (or have some form of upper bound set on it)—instead, these embodiments can cater for scenarios in which any number of colluding receivers may participate in a coalition to generate unauthorized copies of content.
In such an embodiment, the probability that the i-th symbol of a fingerprint-code for a receiver assumes a first symbol value may be pi and the probability that the i-th symbol of a fingerprint-code for a receiver assumes a second symbol value may be 1−pi, where the value pi is generated as an independent random variable having a probability density function of:

f(pi) = 1/(π√(pi(1−pi))) for pi∈(0,1).
In embodiments in which each symbol for each fingerprint-code is generated independent of an expected number of colluding-receivers, updating a score for a receiver may comprise, for one or more collusion-sizes, updating a score for the receiver for that collusion-size that indicates a likelihood that the receiver is a colluding-receiver under the assumption that the number of colluding-receivers is that collusion-size; and the method may then comprise, for each receiver in the set of receivers, if a corresponding score for that receiver exceeds a threshold corresponding to the collusion-size for that score, updating the set of receivers by removing that receiver from the set of receivers, wherein the thresholds are set such that the probability that a receiver that is not a colluding-receiver has a score exceeding the corresponding threshold is at most the predetermined probability.
Updating the score for a collusion-size may comprise disregarding a symbol obtained for the i-th position of the suspect-code if symbols generated for the i-th position of the fingerprint-codes are invalid for that collusion-size.
Symbols generated for the i-th position of the fingerprint-codes may be considered invalid for a collusion-size c if the generation of symbols for the i-th position of the fingerprint-codes independent of an expected number of colluding-receivers used a parameter value that would be inapplicable when generating symbols for the i-th position of fingerprint-codes dependent on an expected collusion-size of c.
Symbols for the i-th position of the fingerprint-codes may be considered invalid for a collusion-size of c if pi lies outside of the range [δ,1−δ], where δ=1/(δcc) and δc is a predetermined constant.
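A sketch of this validity test, again with `d_c` standing in for the predetermined constant δc:

```python
def symbols_valid(p_i, c, d_c=300.0):
    # Position i is counted toward the score for collusion-size c only if its
    # bias p_i lies within the cutoff range [delta, 1 - delta], delta = 1/(d_c*c).
    delta = 1.0 / (d_c * c)
    return delta <= p_i <= 1.0 - delta
```

Positions whose bias falls outside the range for a given candidate collusion-size are simply disregarded when updating the score for that collusion-size.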
In some embodiments, the method comprises generating a fingerprint-code for a receiver in advance of the series of rounds. Alternatively, in some embodiments, said providing comprises generating the part of the fingerprint-code for the receiver.
In some embodiments, the version of the source item of content provided to a receiver is formed by watermarking a copy of the source item of content with the part of the fingerprint-code for the receiver.
According to another aspect of the invention, there is provided an apparatus comprising a processor arranged to carry out a fingerprinting method, wherein the method comprises, for each round in a series of rounds: providing to each receiver in a set of receivers a version of a source item of content, the source item of content corresponding to the round, wherein for the round there is a corresponding part of a fingerprint-code for the receiver, the part comprising one or more symbols, wherein the version provided to the receiver represents those one or more symbols; obtaining, from a suspect item of content, one or more corresponding symbols as a corresponding part of a suspect-code; for each receiver in the set of receivers, updating a corresponding score that indicates a likelihood that the receiver is a colluding-receiver, wherein a colluding-receiver is a receiver that has been provided with a version of a source item of content that has been used to generate a suspect item of content, wherein said updating is based on the fingerprint-code for the receiver and the suspect-code; for each receiver in the set of receivers, if the score for the receiver exceeds a threshold, updating the set of receivers by removing the receiver from the set of receivers so that the receiver is not provided with a further version of a source item of content, wherein the threshold is set such that the probability that a receiver that is not a colluding-receiver has a score exceeding the threshold is at most a predetermined probability.
According to another aspect of the invention, there is provided a computer program which, when executed by a processor, causes the processor to carry out a fingerprinting method comprising, for each round in a series of rounds: providing to each receiver in a set of receivers a version of a source item of content, the source item of content corresponding to the round, wherein for the round there is a corresponding part of a fingerprint-code for the receiver, the part comprising one or more symbols, wherein the version provided to the receiver represents those one or more symbols; obtaining, from a suspect item of content, one or more corresponding symbols as a corresponding part of a suspect-code; for each receiver in the set of receivers, updating a corresponding score that indicates a likelihood that the receiver is a colluding-receiver, wherein a colluding-receiver is a receiver that has been provided with a version of a source item of content that has been used to generate a suspect item of content, wherein said updating is based on the fingerprint-code for the receiver and the suspect-code; for each receiver in the set of receivers, if the score for the receiver exceeds a threshold, updating the set of receivers by removing the receiver from the set of receivers so that the receiver is not provided with a further version of a source item of content, wherein the threshold is set such that the probability that a receiver that is not a colluding-receiver has a score exceeding the threshold is at most a predetermined probability.
The computer program may be carried on a data carrying medium.
In the description that follows and in the Figures, certain embodiments are described. However, it will be appreciated that the invention is not limited to the embodiments that are described and that some embodiments may not include all of the features that are described below. It will be evident, however, that various modifications and changes may be made herein without departing from the broader spirit and scope of the invention as set forth in the appended claims.
It will be useful, first, to provide a high-level summary of the operation of the system 150 before discussing the various components in more detail. The system 150 comprises a content distributor 100 and a plurality of receivers 104 (shown in
The content distributor 100 comprises an encoding module 108 that is arranged to generate different versions (or copies) of items of content 110 (shown in
The content distributor 100 then provides the watermarked items of content 110 to the respective receivers 104. This is carried out so that the watermarked item of content 110 provided to a particular receiver 104 has embedded therein (or at least corresponds to or represents) a sequence of fingerprint symbols that form at least a corresponding part of the fingerprint-code for that particular receiver 104.
An attack 112 may be carried out to generate a new item of content 114, which is a version of the original item of content 102. This new item of content shall be referred to, herein, as a “forgery” 114. The forgery 114 may be generated from (or based on) a single watermarked item of content 110 received by a receiver 104. Alternatively, the forgery 114 may be generated from (or based on) two or more of the watermarked items of content 110 received by the receivers 104, in which case the attack comprises a so-called “collusion attack” in
The content distributor 100 may then receive, or somehow obtain, a copy of the forgery 114. The content distributor 100 also comprises an analysis module 118. The content distributor 100 uses the analysis module 118 to determine a sequence of one or more fingerprint symbols that corresponds to the received forgery 114. This processing is, essentially, an inverse operation to the method by which the encoding module generates a version 110 of a source item of content 102 that corresponds to a sequence of one or more fingerprint symbols. For example, the content distributor 100 may use a watermark decoding operation corresponding to the inverse of a watermark embedding operation carried out by the encoding module 108. The sequence of one or more fingerprint symbols that the analysis module 118 identifies as corresponding to (or having been embedded within) the received forgery 114 is used to form a suspect-code (or a part thereof). The analysis module 118 then uses the suspect-code to try to identify pirates 104, i.e. to identify which receivers 104 out of the plurality of receivers 104 were part of the coalition and received watermarked items of content 110 that were used (at least in part) to create the forgery 114.
The content distributor 100 may then distribute a further item of content 102, with different fingerprint symbols embedded therein, to the receivers 104 in the same way as set out above.
For static fingerprinting or traitor tracing schemes, the generation of fingerprint-codes and the provision of watermarked items of content 110 to receivers 104 does not depend on the results of the processing performed by the analysis module 118. In contrast, for dynamic fingerprinting or traitor tracing schemes, the generation of fingerprint-codes and the provision of watermarked items of content 110 to receivers 104 does depend on the results of the processing performed by the analysis module 118. As will be discussed in more detail below, once a receiver 104 has been identified as a pirate 104 by the analysis module 118, then the content distributor 100 may stop providing that pirate 104 with items of content 102 (or watermarked items of content 110), i.e. fingerprint-codes need no longer be generated for, and provided to, the identified pirates 104. This information from the analysis module 118 is illustrated as being provided as feedback 124 to the encoding module 108. Whilst embodiments will make use of this feedback 124, the system 150 illustrated in
The item of content 102 may comprise any type of information, and may include one or more of audio data, image data, video data, textual data, multimedia data, a web page, software products, security keys, experimental data or any other kind of data which the content distributor 100 wishes to provide (or transmit or communicate or make available) to the receivers 104. Indeed, the item of content 102 may be any unit or amount of content. For example, the content distributor 100 may store video data for a movie, and each item of content 102 could comprise a number of video frames from the movie.
The content distributor 100 may be any entity arranged to distribute content to receivers 104. The content distributor 100 may have a database 120 of items of content 102, such as a repository of audio and video files. Additionally or alternatively, the content distributor 100 may generate the items of content 102 as and when required—for example, the items of content 102 may be decryption keys or conditional access information for decrypting or accessing segments of video, and these decryption keys or conditional access information may be generated as and when the segments of video are provided to the receivers 104, so that storage of the decryption keys or conditional access information in a database 120 may be unnecessary. Additionally or alternatively, the content distributor 100 may not necessarily be the original source of the items of content 102 but may, instead, receive the items of content 102 from a third party and may store and/or distribute those items of content 102 on behalf of the third party.
The content distributor 100 may comprise one or more computer systems, examples of which will be described later with reference to
In
The encoding module 108 may need to make use of various configuration parameters and/or other data in order to carry out its processing. These configuration parameters and/or other data are illustrated in
The encoding module 108 may embed symbols of a fingerprint-code into the item of content 102 by making use of any watermarking embedding technique. Such watermark embedding techniques are well-known in this field of technology. The particular choice of watermark embedding technique is not essential for carrying out embodiments, and, as the skilled person will be familiar with various watermarking embedding techniques, such techniques shall not be described in any detail herein. However, in preferred embodiments, the watermark embedding technique is capable of encoding a watermark codeword within the item of content 102 in a robust manner, so that the embedded watermark codeword is decodable from a watermarked item of content 110 even after various processing has been applied to the watermarked item of content 110, whether that is non-malicious processing (such as data compression for the purposes of transmitting the watermarked item of content 110 to the receiver 104 or the addition of noise/errors due to the transmission of the watermarked item of content 110 to the receiver 104) or malicious processing (in which modifications are deliberately made to the watermarked item of content 110 in order to try to make the embedded watermark codeword not decodable or, at the very least, more difficult to decode). Similarly, the analysis module 118 may decode symbols of an embedded fingerprint-code from an item of content by making use of any corresponding watermarking decoding technique, as are well-known in this field of technology. Again, we shall not describe such decoding techniques in any detail herein as the particular choice of watermark decoding technique is not essential for carrying out embodiments and the skilled person will be familiar with various watermarking decoding techniques.
The watermarked items of content 110 may be provided to the receivers 104 in a number of ways. For example, the content distributor 100 may transmit (or send or communicate) the watermarked items of content 110 to the receivers 104 via one or more networks 106, which may be one or more of the internet, wide area networks, local area networks, metropolitan area networks, wireless networks, broadcast networks, telephone networks, cable networks, satellite networks, etc. Any suitable communication protocols may be used. Additionally or alternatively, the content distributor 100 may store the watermarked items of content 110 so that the receivers 104 can contact the content distributor 100 and access watermarked items of content 110 directly from the content distributor 100. For example, the content distributor 100 could comprise a server storing the various watermarked items of content 110, and the content distributor could host a website with functionality to enable the receivers 104 to download watermarked items of content 110 from the server. In such a scenario, the content distributor 100 may generate the watermarked item of content 110 for a particular receiver 104 as and when that receiver 104 contacts the website to request and download a copy of the item of content 102. The watermarked items of content 110 may be provided to the receivers 104 as a single item of data (e.g. as a downloaded item of content) or they may be streamed to the receivers 104 (such as online video or audio streaming or video and audio broadcasting). Additionally or alternatively, watermarked items of content 110 may be provided to receivers via one or more physical media, such as data stored on a CD, a DVD, a BluRay disc, etc. Hence, the particular method by which the receivers 104 are provided with watermarked items of content 110 is not important for embodiments, and this provision is therefore shown generally by dashed lines in
The receivers 104 may be any device (or client or subscriber system) capable of receiving items of content from the content distributor 100 (and to whom the content distributor 100 initially wishes to provide items of content). For example, a receiver may be a personal computer, a set-top box, a mobile telephone, a portable computer, etc. Examples of such data processing systems will be described later with reference to
The attack 112 performed to generate the forgery 114 may be any form of attack. Indeed, the forgery 114 may be an exact copy of a single watermarked item of content 110 that has been provided to a pirate 104 (so that the attack 112 may be seen as a “null” attack). However, the attack 112 may involve carrying out various processing on the watermarked item(s) of content 110 that are used to generate the forgery 114—this processing could include, for example, one or more of: data compression; addition of noise; geometric and/or temporal and/or frequency transformations; deletion/removal of parts of the watermarked items of content 110; or other such processing.
If multiple pirates 104 are involved in the generation of the forgery 114, then the attack 112 is a collusion attack. The collusion attack 112 performed using the watermarked items of content 110 that the colluding receivers (i.e. pirates 104a, 104b and 104d in
The content distributor 100 may obtain the forgery 114 in a number of ways, such as by finding the forgery 114 at a location on a network, being sent a copy by a third party, etc. The exact manner is not important for embodiments and therefore this is illustrated in
It will be appreciated that various modifications may be made to the system shown in
It will therefore be appreciated that embodiments may operate in a variety of ways with different components of the system being implemented in different ways and, potentially, by different entities (content distributor 100, receivers 104, etc.) within the system.
In the rest of this description, the following notation shall be used:
Before describing embodiments that relate to dynamic (or semi-dynamic) fingerprinting schemes (i.e. in which the system 150 makes use of the feedback 124 shown in
A first “Tardos” fingerprinting scheme (referred to in the following as TAR1) is disclosed in “Optimal Probabilistic Fingerprint-codes” (Gabor Tardos, STOC'03: Proceedings of the thirty fifth annual ACM symposium on Theory of computing, 2003, pages 116-125). The TAR1 scheme operates as follows:
(a) Let the value c≧2 be an integer representing the maximum coalition size that the fingerprinting scheme is to cater for (i.e. the maximum number of pirates 104 who can generate a forgery 114). Let ∈1∈(0,1) be a desired upper bound on the probability of incorrectly identifying a receiver 104 as being a pirate 104, i.e. a false positive probability for the fingerprinting scheme.
(b) Set the length, l, of each receiver's 104 fingerprint-code to be l=100c²k, where k=⌈log(n/∈1)⌉, so that {right arrow over (x)}j=(xj,1, xj,2, . . . , xj,l). Set δ=1/(300c). Set Z=20ck. Set δ′=arcsin(√δ) such that 0<δ′<π/4.
(c) For each i=1, . . . , l choose a value pi independently from the range [δ,1−δ] according to a distribution with probability density function
(d) For each i=1, . . . , l and for each j=1, . . . , n, the i-th symbol in the fingerprint-code for the j-th receiver 104 (i.e. xj,i) is generated as an independent random variable such that P(xj,i=1)=pi and P(xj,i=0)=1−pi, i.e. the probability that xj,i assumes a first predetermined value is pi and the probability that xj,i assumes a second predetermined value is 1−pi. Such an independent random variable is often referred to as a Bernoulli random variable (or as having a Bernoulli distribution), and may be represented by the notation: Ber(pi). The values 1 and 0 are used here for the first and second predetermined symbol values, but it will be appreciated that other symbol values could be used instead.
(e) Having received a suspect-code {right arrow over (y)}, then for each j=1, . . . , n, a score Sj for the j-th receiver 104 is calculated according to
where
where g0(p)=−√(p/(1−p)) and g1(p)=√((1−p)/p).
(f) For each receiver 104, identify (or accuse) that receiver 104 as being a pirate 104 if that receiver's score exceeds the threshold value Z, i.e. the j-th receiver 104 is accused of being a pirate 104 if Sj>Z.
With this scheme, the probability of incorrectly identifying a receiver 104 as being a pirate 104 (i.e. the false positive probability) is at most ∈1, whilst the probability of not managing to identify any pirates 104 at all (which can be seen as a first type of false negative probability) is at most ∈2=(∈1/n)^(c/4), i.e. (∈1/n) raised to the power c/4.
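The TAR1 construction above can be illustrated with a short sketch (in Python, which this description does not otherwise use; all function names are illustrative). The density f(p), proportional to 1/√(p(1−p)) on [δ, 1−δ], can be sampled by drawing r uniformly from [δ′, π/2−δ′] and setting p=sin²(r):

```python
import math
import random

def tar1_setup(n, c, eps1, rng=random):
    """Compute the TAR1 parameters and sample the per-position biases p_i."""
    k = math.ceil(math.log(n / eps1))
    l = 100 * c * c * k              # fingerprint-code length l = 100*c^2*k
    delta = 1.0 / (300 * c)          # cutoff: each p_i lies in [delta, 1-delta]
    Z = 20 * c * k                   # accusation threshold
    d_prime = math.asin(math.sqrt(delta))     # 0 < delta' < pi/4
    # Sampling r uniformly from [delta', pi/2 - delta'] and setting
    # p = sin^2(r) yields exactly the truncated arcsine density
    # f(p) proportional to 1/sqrt(p(1-p)) on [delta, 1-delta].
    p = [math.sin(rng.uniform(d_prime, math.pi / 2 - d_prime)) ** 2
         for _ in range(l)]
    return l, Z, p

def tar1_codeword(p, rng=random):
    """Draw one receiver's fingerprint-code: x_i ~ Ber(p_i)."""
    return [1 if rng.random() < p_i else 0 for p_i in p]

def tar1_score(x, y, p):
    """TAR1 score S_j: only positions with suspect symbol y_i = 1 contribute,
    using g1(p) = sqrt((1-p)/p) and g0(p) = -sqrt(p/(1-p))."""
    s = 0.0
    for x_i, y_i, p_i in zip(x, y, p):
        if y_i == 1:
            s += math.sqrt((1 - p_i) / p_i) if x_i == 1 \
                else -math.sqrt(p_i / (1 - p_i))
    return s
```

A receiver would then be accused precisely when its score exceeds the threshold Z, as in step (f) above.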
A modified Tardos fingerprinting scheme (referred to in the following as TAR2) is disclosed in “Symmetric Tardos Fingerprinting Codes for Arbitrary Alphabet Sizes” (Boris Skoric et al., Des. Codes Cryptography, 46 (2), 2008, pages 137-166). The TAR2 scheme operates in the same way as the TAR1 scheme except that Sj,i is defined as:
With the TAR2 scheme, the length of the fingerprint-codes, l, can be 4 times smaller than that stipulated in the TAR1 scheme whilst maintaining the above false positive and false negative probabilities.
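As a sketch of the symmetric score (the display defining Sj,i is not reproduced above; the form below follows the cited Skoric et al. paper), positions with yi=0 also contribute, with the substitution pi → 1−pi:

```python
import math

def g0(p):
    """Contribution when the receiver's symbol differs from the suspect symbol."""
    return -math.sqrt(p / (1 - p))

def g1(p):
    """Contribution when the receiver's symbol matches the suspect symbol."""
    return math.sqrt((1 - p) / p)

def tar2_score(x, y, p):
    """Symmetric Tardos score: unlike TAR1, positions with y_i = 0 also
    contribute, with the roles of the symbols swapped via p_i -> 1 - p_i."""
    s = 0.0
    for x_i, y_i, p_i in zip(x, y, p):
        if y_i == 1:
            s += g1(p_i) if x_i == 1 else g0(p_i)
        else:
            s += g1(1 - p_i) if x_i == 0 else g0(1 - p_i)
    return s
```

Because every position now carries information, the same error bounds are achieved with a code roughly four times shorter, as stated above.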
This document also considers symbol alphabets of size greater than 2 (i.e. non-binary alphabets), i.e. it discloses how Tardos-based fingerprinting schemes may be implemented using a non-binary symbol alphabet. In particular, a symbol alphabet of size q may be used, so that the symbol alphabet may be, for example, {v1, . . . , vq}. Then, for each i=1, . . . , l and for each j=1, . . . , n, the i-th symbol in the fingerprint-code for the j-th receiver 104 (i.e. xj,i) may be generated as an independent random variable such that P(xj,i=v1)=pi,1, . . . , P(xj,i=vq)=pi,q for values pi,1, pi,2, . . . , pi,q, i.e. the probability that the i-th symbol assumes the k-th symbol value in the symbol alphabet is pi,k (for k=1, . . . , q). For each i=1, . . . , l, the values pi,1, . . . , pi,q may be chosen for the i-th symbol position independently according to a distribution which may be, for example, a Dirichlet distribution. This paper then discusses how the scores Sj,i and the threshold Z should be adapted accordingly.
A further modified Tardos fingerprinting scheme (referred to in the following as TAR3) is disclosed in “Tardos Fingerprinting is better than we thought” (Boris Skoric et al., CoRR, abs/cs/0607131, 2006), which focuses on finding improvements for the parameters l, δ and Z.
A further modified Tardos fingerprinting scheme (referred to in the following as TAR4) is disclosed in “Improved versions of Tardos' fingerprinting scheme” (Oded Blayer et al., Des. Codes Cryptography, 48, pages 79-103, 2008) which also focuses on finding improvements for the parameters l, δ and Z.
A further modified Tardos fingerprinting scheme (referred to in the following as TAR5) is disclosed in “Accusation Probabilities in Tardos Codes: the Gaussian Approximation is better than we thought” (Antonino Simone et al., Cryptology ePrint Archive, Report 2010/472, 2010).
A further modified Tardos fingerprinting scheme (referred to in the following as TAR6) is disclosed in “An Improvement of Discrete Tardos Fingerprinting Codes” (Koji Nuida et al., Designs, Codes and Cryptography, 52, pages 339-362, 2009), which focuses on optimizing l (and Z), for small predetermined values of c, by constructing different probability distributions f(p).
A further modified Tardos fingerprinting scheme (referred to in the following as TAR7) operates as follows:
(a) Let the value c≧2 be an integer representing the maximum coalition size that the fingerprinting scheme is to cater for (i.e. the maximum number of pirates 104 who can generate a forgery 114). Let ∈1∈(0,1) be a desired upper bound on the probability of incorrectly identifying a receiver 104 as being a pirate 104, i.e. a false positive probability for the fingerprinting scheme.
(b) Let dα, r, s and g be positive constants with r>½, and let dl, dz, dδ and η be values such that dl, dz, dδ, dα, r, s, g and η satisfy the following four requirements:
where h−1 is the function mapping from (0, ∞) to (½, ∞) given by h−1(x)=(e^x−1−x)/x², and h is the inverse function mapping from (½, ∞) to (0, ∞).
(c) Set the length, l, of each receiver's 104 fingerprint-code to be l=dlc²k, where k=⌈log(n/∈1)⌉. Set δ=1/(dδc). Set Z=dzck. Set δ′=arcsin(√δ) such that 0<δ′<π/4.
(d) For each i=1, . . . , l choose a value pi independently from the range [δ,1−δ] according to a distribution with probability density function
(e) For each i=1, . . . , l and for each j=1, . . . , n, the i-th symbol in the fingerprint-code for the j-th receiver 104 (i.e. xj,i) is generated as an independent random variable such that P(xj,i=1)=pi and P(xj,i=0)=1−pi, i.e. the probability that xj,i assumes a first predetermined value is pi and the probability that xj,i assumes a second predetermined value is 1−pi. Again, the values 1 and 0 are used here for the first and second predetermined symbol values, but it will be appreciated that other symbol values could be used instead.
(f) Having received a suspect-code {right arrow over (y)}, then for each j=1, . . . , n, a score Sj for the j-th receiver 104 is calculated according to
where
where g0(p)=−√(p/(1−p)) and g1(p)=√((1−p)/p).
(g) For each receiver 104, identify (or accuse) that receiver 104 as being a pirate 104 if that receiver's score exceeds the threshold value Z, i.e. the j-th receiver 104 is accused of being a pirate 104 if Sj>Z.
With this scheme, the probability of incorrectly identifying a receiver 104 as being a pirate (i.e. the false positive probability) is at most ∈1, whilst the probability of not managing to identify any pirates 104 at all (i.e. the first type of false negative probability) is again at most ∈2. The mathematical proofs of these false positive and false negative results are provided in sections 8.3 and 8.4 of the appendix at the end of this description (which form part of a thesis “Collusion-resistant traitor tracing schemes” by Thijs Martinus Maria Laarhoven, to be submitted to the Department of Mathematics and Computer Science, Eindhoven University of Technology).
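Since the TAR7 requirements of step (b) are stated in terms of h and h−1, the following sketch (illustrative Python; the bracket-growing and iteration count are assumptions) evaluates h−1 directly and inverts it numerically to obtain h:

```python
import math

def h_inv(x):
    """h^{-1}(x) = (e^x - 1 - x) / x^2. Using math.expm1 avoids the
    catastrophic cancellation of exp(x) - 1 for small x. The function is
    increasing on (0, oo), with values in (1/2, oo)."""
    return (math.expm1(x) - x) / (x * x)

def h(y, hi=1.0):
    """Numerically invert h_inv by bisection; h maps (1/2, oo) to (0, oo)."""
    if y <= 0.5:
        raise ValueError("h is only defined on (1/2, oo)")
    lo = 1e-12
    while h_inv(hi) < y:      # grow the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):      # bisect to full double precision
        mid = (lo + hi) / 2.0
        if h_inv(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```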
Other static probabilistic fingerprinting schemes exist (with binary and/or non-binary symbol alphabets), such as the one described in “Collusion-Secure Fingerprinting for Digital Data” (Dan Boneh et al., IEEE Transactions on Information Theory, pages 452-465, 1998)—referred to below as BER1.
All of these static probabilistic fingerprinting schemes operate as follows. For each receiver 104, the symbols for the entire fingerprint-code for that receiver are generated and the receiver 104 is then provided with the entire fingerprint-code (as has been described above with reference to
As mentioned above, embodiments provide probabilistic fingerprinting schemes that make use of the feedback loop 124 of the system 150 illustrated in
The method 200 maintains a set of “active” receivers 104 (or “connected” receivers 104). The content distributor 100 may, for example, associate an active-flag with each receiver 104, where the active-flag for a receiver 104 indicates whether that receiver 104 is “active” or “inactive” (or “connected” or “disconnected”)—these active-flags may be updated when a receiver 104 is changed from being an active receiver 104 to an inactive receiver 104. Alternatively, the content distributor 100 may maintain a list of identifiers of receivers 104 who are considered to be in the set of active receivers 104—this list may be modified by removing an identifier of a receiver 104 when that receiver 104 changes from being an active receiver 104 to an inactive receiver 104. A receiver 104 is an active receiver if a copy of an item of content 102 should (or may) be distributed to that receiver 104; likewise, a receiver 104 is an inactive receiver 104 if an item of content 102 should (or may) not be distributed to that receiver 104. Thus, the set of active receivers is the collection of receivers 104, out of the entire population of n receivers 104, to whom items of content 102 should (or may) be distributed. As items of content 102 are to be distributed to a receiver 104 with one or more symbols of that receiver's 104 fingerprint-code embedded or contained therein, a receiver 104 may be viewed as an active receiver 104 if further symbols of the fingerprint-code for that receiver 104 should be distributed to that receiver 104; likewise, a receiver 104 may be viewed as an inactive receiver 104 if further symbols of the fingerprint-code for that receiver 104 should not be distributed to that receiver 104.
For the first round in the method 200, the set of active receivers 104 comprises all of the n receivers 104 in the population of receivers 104 of the system 150 (or at least all of those receivers 104 to whom the content distributor 100 initially wishes to provide items of content 102). However, as will be described shortly, during one or more rounds, one or more receivers 104 may be identified as being a pirate 104. When this happens, those identified pirates 104 are removed from the set of active receivers 104, i.e. the set of active receivers 104 is updated by removing any identified colluding receivers 104 from the set of active receivers 104. This may therefore be seen as de-activating receivers 104 that were initially active receivers 104. The content distributor 100 then no longer provides items of content 102 to de-activated receivers 104 (i.e. to receivers 104 who are no longer in the set of active receivers 104).
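The bookkeeping for the set of active receivers described above can be sketched as follows (illustrative Python; the class and method names are assumptions, and the active-flag-per-receiver alternative described above would serve equally well):

```python
class ActiveSet:
    """Track which receivers are still 'active' (may be sent content).
    Identified pirates are de-activated and receive no further symbols."""

    def __init__(self, receiver_ids):
        self._active = set(receiver_ids)   # initially, the whole population

    def is_active(self, receiver_id):
        return receiver_id in self._active

    def deactivate(self, receiver_ids):
        """Remove identified pirates from the set of active receivers."""
        self._active -= set(receiver_ids)

    def add(self, receiver_id):
        """New subscribers may join at any stage of the method."""
        self._active.add(receiver_id)

    def __iter__(self):
        return iter(self._active)

    def __len__(self):
        return len(self._active)
```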
The processing for each round is set out below.
At the step S202, each receiver 104 in the set of active receivers 104 is provided with one or more symbols as a (next) part of the fingerprint-code for the receiver 104. This part of the fingerprint-code corresponds to the current round—for the first round, the symbols provided to an active receiver 104 form a first/initial part of the fingerprint-code for that receiver 104; for subsequent rounds, the symbols provided to an active receiver 104 form a corresponding subsequent part of the same fingerprint-code for that receiver 104. Thus, the part of the fingerprint-code provided may be seen as a portion or subset of the fingerprint-code for the receiver 104 for the current round. In particular, if the symbols of the fingerprint-code {right arrow over (x)}j that have been provided so far in previous rounds to the j-th receiver 104 are xj,1, xj,2, . . . xj,r, then at the step S202, the j-th receiver 104 is provided with further symbols xj, (r+1), . . . , xj,(r+w) for some positive integer w as the next part of that fingerprint-code {right arrow over (x)}j. Preferably, the number of symbols (i.e. w) is 1, as this means that the set of active receivers 104 can be updated more frequently for a given number of fingerprint-code symbols (e.g. an update for every fingerprint-code symbol position instead of an update for, say, every set of 10 fingerprint-code symbol positions)—this can thereby lead to earlier identification, and de-activation, of pirates 104. However, embodiments may make use of values of w greater than 1. The value of w may be a predetermined constant. 
However, in some embodiments, the value of w may change from one round to another round—this could be desirable because the watermark embedding module 108 may be able to embed a first number w1 of fingerprint symbols in a first item of content 102 during one round and may be able to embed a second number w2 of fingerprint symbols in a second item of content 102 during a subsequent round, where w1 is different from w2. This could be, for example, due to differing data sizes of the items of content 102.
As set out above, there are a number of ways in which these symbols may be provided to the receivers 104. Essentially, though, at the step S202, each receiver 104 is provided with a version of a source item of content (e.g. a watermarked item of content 110). The source item of content 102 corresponds to the current round—for example, the source item of content 102 may be the next number of video frames forming a part of a movie being provided to receivers 104—and the version of the source item of content 110 provided to a receiver 104 corresponds to the next part of the fingerprint-code for that receiver 104 (e.g. the watermarked item of content 110 has embedded therein the next w symbols for the fingerprint-code for the receiver 104). For example, in the case where w is 1, the content distributor 100 may generate two versions of a source item of content, a first version having embedded therein 0 as a fingerprint symbol and a second version having embedded therein 1 as a fingerprint symbol. If the next symbol for the fingerprint-code for a particular active receiver 104 is a 0, then the content distributor 100 provides the first version to that receiver 104; if the next symbol for the fingerprint-code for a particular active receiver 104 is a 1, then the content distributor 100 provides the second version to that receiver 104.
The content distributor 100 may use its encoding module 108 to generate an entire fingerprint-code for a receiver 104—the content distributor 100 may then store this generated fingerprint-code in the memory 126 (e.g. as part of the data 122) for later use. The content distributor 100 may then select and provide one or more symbols from that stored fingerprint-code to the corresponding receiver 104. Alternatively, the content distributor 100 may use its encoding module 108 to generate symbols for a part of a fingerprint-code for a receiver 104 as and when those symbols are needed (such as when the item of content 102 is to be provided to the receiver 104)—in this way, the content distributor 100 does not need to store an entire fingerprint-code for a receiver 104 in its memory 126. With this method, a fingerprint-code for a receiver 104 is generated in parts and “grows” as further symbols for the fingerprint-code are generated when required.
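The second alternative, in which fingerprint-codes "grow" as symbols are generated on demand, can be sketched as follows (illustrative Python for a binary alphabet; the arcsine-style sampling of the shared biases pi follows the Tardos-based schemes described earlier, and all names are assumptions):

```python
import math
import random

class SymbolSource:
    """Generate fingerprint symbols on demand: the per-position biases p_i
    (shared by all receivers) are extended lazily, so no receiver's full
    fingerprint-code ever needs to be generated or stored up front."""

    def __init__(self, delta, rng=None):
        self.rng = rng or random.Random()
        self.delta = delta
        self.p = []       # shared per-position biases, grown as needed
        self.codes = {}   # receiver id -> symbols issued to that receiver

    def _extend_p(self, upto):
        # Sample p_i from the arcsine-style density on [delta, 1 - delta],
        # as in the Tardos-based schemes described earlier.
        d_prime = math.asin(math.sqrt(self.delta))
        while len(self.p) < upto:
            r = self.rng.uniform(d_prime, math.pi / 2 - d_prime)
            self.p.append(math.sin(r) ** 2)

    def next_symbols(self, receiver_id, w):
        """Return the next w symbols of this receiver's fingerprint-code."""
        code = self.codes.setdefault(receiver_id, [])
        start = len(code)
        self._extend_p(start + w)
        fresh = [1 if self.rng.random() < self.p[i] else 0
                 for i in range(start, start + w)]
        code.extend(fresh)
        return fresh
```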
The generated symbols may then be provided to the receiver 104 in a number of ways. For example, the content distributor 100 could use its watermark encoding module 108 to embed the symbols as watermark payload data into an item of content 102 that is to be provided to receivers 104 for the current round, and then provide the watermarked item of content 110 to the receiver 104. Alternatively, the content distributor 100 could provide the symbols and the item of content 102 for the current round to the receiver 104, preferably in a secure manner such as an encrypted package, wherein the receiver 104 is arranged to embed the received symbols in the received item of content 102 (such as at the time the receiver 104 decrypts the received package or otherwise accesses the received item of content 102).
Alternatively, as mentioned above, the generation of the symbols forming a part of a fingerprint-code for a receiver 104 may be performed by the receiver 104. For example, the content distributor 100 may provide the item of content 102 to the receiver 104, preferably in a secure manner such as an encrypted package, wherein the receiver 104 is arranged to generate the fingerprint symbols and embed the received symbols in the received item of content 102 (such as at the time the receiver 104 decrypts the received package or otherwise accesses the received item of content 102).
In summary, then, the step S202 involves providing to each receiver 104 in the set of active receivers 104 one or more symbols xj,i as a (next) part of the fingerprint-code {right arrow over (x)}j for that receiver 104. Methods for generating the particular one or more symbols xj,i shall be described shortly.
At the step S204, the content distributor 100 obtains one or more symbols forming a corresponding part of the suspect-code {right arrow over (y)} (or a suspect fingerprint-code). In particular, if the symbols of the fingerprint-codes {right arrow over (x)}j that have been provided so far (in any previous rounds and the current round of the method 200) to currently active receivers 104 are made up of r symbols xj,1, xj,2, . . . , xj,r, then the content distributor 100 will, at the end of the step S204, have obtained a corresponding suspect-code {right arrow over (y)} with symbols y1, y2, . . . , yr. In particular, for each symbol of the fingerprint-codes {right arrow over (x)}j that have been provided so far (in any previous rounds and the current round of the method 200) to currently active receivers 104, there is a corresponding symbol in the suspect-code. Therefore if, at the step S202 of the current round, the method 200 provided the currently active receivers 104 with w respective symbols xj,(r−w+1), . . . , xj,r forming a part of the respective fingerprint-codes {right arrow over (x)}j, then at the step S204 the content distributor 100 obtains w corresponding symbols y(r−w+1), . . . , yr as a corresponding part to add to, or extend, the suspect-code {right arrow over (y)}.
In particular, the content distributor 100 may receive a forgery 114 and use the analysis module 118 to decode a watermark payload from the forgery. The decoded watermark payload is a sequence of one or more fingerprint symbols that corresponds to (or is represented by) the received forgery 114. This processing is, essentially, an inverse operation to the method by which the encoding module generates a version 110 of a source item of content 102 that corresponds to a sequence of one or more fingerprint symbols. Hence, this watermark payload comprises the next symbols y(r−w+1), . . . , yr that form the next part of the suspect-code {right arrow over (y)}. In this way, these symbols of the suspect-code may be obtained, or received, as a code embedded, or encoded, within the forgery 114. Moreover, the suspect-code {right arrow over (y)} itself may be formed from, and grow out of, symbols obtained from a series of forgeries 114 that the content distributor 100 obtains or receives over the series of rounds.
At a step S206, the content distributor 100 uses the analysis module 118 to analyse the suspect-code {right arrow over (y)}. The analysis module 118 determines a likelihood (or an indication of a likelihood) that the suspect-code {right arrow over (y)} has been formed using one or more of the symbols xj,1, xj,2, . . . , xj,r that have been provided to the j-th receiver 104 so far. In other words, the analysis module 118 determines a likelihood (or an indication of a likelihood) that the j-th receiver 104 is a pirate 104, i.e. that one or more watermarked items of content 110 provided to the j-th receiver 104 have been used, somehow, to generate one or more forgeries 114. This is done for each receiver 104 in the set of active receivers 104.
In particular, for each receiver 104 in the current set of active receivers 104, the analysis module 118 may maintain a corresponding score (represented by Sj for the j-th receiver 104 in the following) that indicates a likelihood that the suspect-code {right arrow over (y)} has been formed using one or more of the symbols that have been provided to that receiver 104 so far. The score, Sj, for the j-th receiver 104 thus represents a likelihood that the j-th receiver 104 is a colluding-receiver, where a colluding-receiver is a receiver 104 that has been provided with a version of a source item of content 110 that has been used to generate one of the forgeries 114 that have been received so far during the method 200. The analysis module 118, at the step S206, updates the score Sj for each currently active receiver 104 based on the fingerprint-codes for the active receivers 104 and the suspect-code (and, in particular, on the newly obtained symbols y(r−w+1), . . . , yr for the suspect-code). For example, if a newly obtained symbol yi matches the symbol xj,i of the fingerprint-code for the j-th receiver 104, then the score Sj for the j-th receiver may be incremented; if the newly obtained symbol yi does not match the symbol xj,i of the fingerprint-code for the j-th receiver 104, then the score Sj for the j-th receiver may be decremented. Particular methods for calculating and updating the scores Sj shall be described shortly.
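For a Tardos-style score such as that of the TAR1 scheme described earlier, the per-round score update might be sketched as follows (illustrative Python; only the w newly obtained suspect symbols and the corresponding newly issued fingerprint symbols contribute, so the running score Sj is simply carried forward between rounds):

```python
import math

def update_score(score, x_new, y_new, p_new):
    """Add the contribution of the w newly issued positions to a receiver's
    running score S_j (TAR1-style scoring: only positions where the newly
    obtained suspect symbol y_i is 1 contribute)."""
    for x_i, y_i, p_i in zip(x_new, y_new, p_new):
        if y_i == 1:
            score += math.sqrt((1 - p_i) / p_i) if x_i == 1 \
                else -math.sqrt(p_i / (1 - p_i))
    return score
```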
The scores Sj may be stored in the memory 126 of the content distributor 100 as a part of the data 122.
The scores Sj are initialized (e.g. to a value of 0) at the very beginning of the method 200 (i.e. before any of the rounds are started) and are updated/modified each round, i.e. they are not re-initialized when a new round commences or when pirates 104 that have been identified are deactivated.
At the step S208, the analysis module 118 uses these updated scores Sj, or likelihoods, to try to identify one or more pirates 104. In particular, for each receiver 104 in the set of active receivers 104, that receiver's 104 score Sj is compared to a threshold Z and if the score Sj exceeds that threshold Z, then that receiver 104 is identified as being a pirate 104. The analysis module 118 then updates the set of active receivers 104 by removing any identified pirates 104 from the set of active receivers 104—i.e. any identified pirates 104 are de-activated, or disconnected, as described above. Thus, any identified pirates 104 are not provided with further fingerprint symbols in subsequent rounds of the method 200, i.e. any identified pirates 104 are not provided with further versions of items of content 110 in subsequent rounds of the method 200.
The threshold is set such that the probability of incorrectly identifying any innocent (non-colluding) receiver 104 as actually being a pirate (i.e. the false positive probability) is at most a predetermined probability. In other words, the threshold is set such that the probability that the current suspect-code {right arrow over (y)} was not actually formed using one or more symbols that have been provided to a receiver 104 who has a score exceeding the threshold is at most the predetermined probability.
At the step S210, it is determined whether or not to terminate the method 200. The method 200 may be terminated if all of the different items of content 102 that are to be distributed have been distributed. Additionally or alternatively, the method 200 may be terminated if at least a predetermined number of symbols xj,i have been provided to the active receivers 104 at the step S202—this predetermined number could be a calculated length l for the fingerprint-codes. Additionally or alternatively, the method 200 may be terminated if at least a predetermined number of pirates 104 have been identified at the step S206 across the rounds that have been carried out so far. Additionally or alternatively, it is possible that the step S204 may fail to obtain further symbols for the suspect-code (for example, a watermark decoding operation may indicate that it has failed to successfully decode a watermark payload from a forgery 114)—in this case, the processing for the current round may skip the steps S206 and S208 (as indicated by a dashed arrow connecting the steps S204 and S210 in
If it is determined that the method is to continue, then the processing returns to the step S202. The method 200 therefore commences a next round, which is carried out based on the current set of active receivers 104 (which might have been updated at the step S208 of the previous round) and which involves providing those active receivers 104 with a further/new item of content 102 for the next round (and hence one or more further symbols as a further/next part of the fingerprint-codes for those receivers 104).
For example, the content distributor 100 may be able to embed 1 symbol xj,i in a single frame of video. The content distributor 100 receives corresponding frames of video as forgeries 114 produced by the coalition of pirates 104. In this case, each item of content 102 would correspond to a frame of video and the value of w would be 1, so that the total amount of a receiver's fingerprint-code that has been provided to the receiver 104 grows by 1 symbol for each round of the method 200. Then, for every frame of pirate video received, the content distributor 100 carries out an analysis to try to identify one or more of the pirates 104 that are generating the pirate version; if any are identified, then they are deactivated. This leaves the coalition of pirates 104 with fewer members actually receiving copies of the video content—eventually, further pirates 104 will be identified until no more pirates 104 remain active.
As another example, the content distributor 100 may be able to embed 2 symbols xj,i in a single frame of video, but may only be able to carry out a decoding operation once for every 5 frames of a received forgery 114. In this case, each item of content 102 would correspond to 5 frames of video and the value of w would be 10, so that the total amount of a receiver's fingerprint-code that has been provided to the receiver 104 grows by 10 symbols for each round of the method 200. Then, for every 5 frames of video of a pirate version of the distributed video, the content distributor 100 carries out an analysis to try to identify one or more of the pirates 104 that are generating the pirate version; if any are identified, then they are deactivated. This leaves the coalition of pirates 104 with fewer members actually receiving copies of the video content—eventually, further pirates 104 will be identified until no more pirates 104 remain active.
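Putting the steps S202 to S208 together, a round-by-round simulation of this behaviour might be sketched as follows (illustrative Python; the majority vote used to form the suspect symbols is just one possible collusion attack, the TAR1-style scoring is one possible score, and all names are assumptions):

```python
import math

def dynamic_rounds(codes, p, Z, pirate_ids, num_rounds, w=1):
    """Simulate rounds of the method: each round issues w fresh symbol
    positions (step S202), forms each suspect symbol by a majority vote of
    the still-active pirates (step S204; one possible collusion attack),
    updates every active receiver's running score with a TAR1-style score
    (step S206), and de-activates receivers whose score exceeds the
    threshold Z (step S208)."""
    active = set(codes)
    scores = {rid: 0.0 for rid in codes}
    caught = []
    pos = 0
    for _ in range(num_rounds):
        live_pirates = [rid for rid in pirate_ids if rid in active]
        if not live_pirates:
            break                     # no one left to produce forgeries
        for i in range(pos, pos + w):
            votes = sum(codes[rid][i] for rid in live_pirates)
            y_i = 1 if 2 * votes >= len(live_pirates) else 0
            for rid in active:
                x_i, p_i = codes[rid][i], p[i]
                if y_i == 1:
                    scores[rid] += (math.sqrt((1 - p_i) / p_i) if x_i == 1
                                    else -math.sqrt(p_i / (1 - p_i)))
        pos += w
        newly = [rid for rid in active if scores[rid] > Z]
        active -= set(newly)          # de-activate identified pirates
        caught.extend(newly)
    return caught, scores
```

With a lone pirate and unbiased positions (all pi = ½), the pirate's score grows by 1 per round while an innocent receiver holding the complementary code drops by 1 per round, so the pirate crosses the threshold first and is disconnected.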
It will be appreciated that if, at the step S206, no pirates 104 are identified (or it is determined that no further pirates 104 have been implicated), then the step S208 may be skipped over (as there are no additional pirates 104 to remove from the set of active receivers 104). This is illustrated in
It will be appreciated that, at any stage during the method 200, one or more new receivers 104 may be added to the set of active receivers 104 (for example when a new subscriber joins a content distribution system).
At a first round, a first part of the fingerprint-code {right arrow over (x)}j for the j-th receiver 104j is embedded as a watermark into a first item of content 102-1. This first part of the fingerprint-code {right arrow over (x)}j is made up of w1 symbols (xj,1, . . . , xj,w1). The resulting watermarked item of content 110-1x is provided to the j-th receiver 104j, thereby providing that j-th receiver 104j with the set of w1 symbols (xj,1, . . . , xj,w1). When the content distributor 100 obtains a forgery 114-1 of the first item of content 102-1, then corresponding symbols (y1, . . . , yw1) of the suspect-code are obtained by performing a watermark decoding operation on the received forgery 114-1. The score Sj for the j-th receiver 104 is then updated to indicate a likelihood that the j-th receiver 104j is a pirate and the watermarked item of content 110-1x was used to create the forgery 114-1. In other words, the score Sj for the j-th receiver 104 is updated to indicate a likelihood that the suspect-code {right arrow over (y)}=(y1, . . . , yw1) has been formed using one or more of the symbols (xj,1, . . . , xj,w1) that were provided to the j-th receiver 104j. If Sj exceeds a threshold, then the j-th receiver 104j is de-activated; otherwise, processing continues to the second round. The above is carried out for each currently active receiver 104.
At the second round, a second part of the fingerprint-code {right arrow over (x)}j for the j-th receiver 104j is embedded as a watermark into a second item of content 102-2. This second part of the fingerprint-code {right arrow over (x)}j is made up of w2 symbols (xj,(w1+1), . . . , xj,(w1+w2)). The resulting watermarked item of content 110-2y is provided to the j-th receiver 104j, thereby providing that j-th receiver 104j with the set of w2 symbols (xj,(w1+1), . . . , xj,(w1+w2)). When the content distributor 100 obtains a forgery 114-2 of the second item of content 102-2, then corresponding symbols (y(w1+1), . . . , y(w1+w2)) of the suspect-code are obtained by performing a watermark decoding operation on the received forgery 114-2. The score Sj for the j-th receiver 104 is then updated to indicate a likelihood that the j-th receiver 104j is a pirate and one or more of the watermarked items of content 110-1x, 110-2y were used to create one or more of the forgeries 114-1, 114-2. In other words, the score Sj for the j-th receiver 104 is updated to indicate a likelihood that the suspect-code {right arrow over (y)}=(y1, . . . , y(w1+w2)) has been formed using one or more of the symbols (xj,1, . . . , xj,(w1+w2)) that were provided to the j-th receiver 104j. This may involve taking the current score Sj for the j-th receiver 104j and adding to that current score Sj a value that results from a comparison (or processing) of the w2 received symbols (y(w1+1), . . . , y(w1+w2)) and the w2 symbols (xj,(w1+1), . . . , xj,(w1+w2)) of the fingerprint-code for the j-th receiver 104j, thereby obtaining a new score Sj. If Sj exceeds a threshold, then the j-th receiver 104j is de-activated; otherwise, processing continues to the third round. The above is carried out for each currently active receiver 104.
At the third round, a third part of the fingerprint-code {right arrow over (x)}j for the j-th receiver 104j is embedded as a watermark into a third item of content 102-3. This third part of the fingerprint-code {right arrow over (x)}j is made up of w3 symbols (xj,(w1+w2+1), . . . , xj,(w1+w2+w3)). The resulting watermarked item of content 110-3z is provided to the j-th receiver 104, thereby providing that j-th receiver 104j with the set of w3 symbols (xj,(w1+w2+1), . . . , xj,(w1+w2+w3)). When the content distributor 100 obtains a forgery 114-3 of the third item of content 102-3, then corresponding symbols (y(w1+w2+1), . . . , y(w1+w2+w3)) of the suspect-code are obtained by performing a watermark decoding operation on the received forgery 114-3. The score Sj for the j-th receiver 104 is then updated to indicate a likelihood that the j-th receiver 104 is a pirate and one or more of the watermarked items of content 110-1x, 110-2y, 110-3z were used to create one or more of the forgeries 114-1, 114-2, 114-3. In other words, the score Sj for the j-th receiver 104j is updated to indicate a likelihood that the suspect-code {right arrow over (y)}=(y1, . . . , yw1+w2+w3) has been formed using one or more of the symbols (xj,1, . . . , xj,(w1+w2+w3) that were provided to the j-th receiver 104j. This may involve taking the current score Sj for the j-th receiver 104j and adding to that current score Sj a value that results from a comparison (or processing) of the w3 received symbols (y(w1+w2+1), . . . , y(w1+w2+w3)) and the w3 symbols (xj,(w1+w2+1), . . . , xj,(w1+w2+w3)) of the fingerprint-code for the j-th receiver 104j, thereby obtaining a new score Sj. If Sj exceeds a threshold, then the j-th receiver 104j is de-activated; otherwise, processing continues to the fourth round (not shown). The above is carried out for each currently active receiver 104.
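The three rounds described above follow one pattern: provide the next w symbols of each active receiver's fingerprint-code, decode the corresponding suspect symbols from the obtained forgery, update each active receiver's score, and de-activate any receiver whose score exceeds the threshold. A minimal sketch of one such round in Python (the callables decode_suspect and update_score are hypothetical placeholders for the watermark decoding operation and the score-update comparison described above):

```python
def run_round(active, fingerprints, start, w, decode_suspect, update_score,
              scores, threshold):
    """One round of the dynamic scheme sketched above: send the next w
    symbols of each active receiver's fingerprint-code, decode the
    corresponding suspect symbols from the forgery, update scores, and
    de-activate any receiver whose score exceeds the threshold."""
    # Suspect symbols for positions start .. start+w-1, decoded from the forgery.
    y = decode_suspect(start, w)
    accused = []
    for j in list(active):
        x = fingerprints[j][start:start + w]
        scores[j] += update_score(x, y)
        if scores[j] > threshold:
            active.remove(j)  # de-activate the accused receiver
            accused.append(j)
    return accused
```

The round length w may differ per round (w1, w2, w3 above); the caller advances `start` by the previous round's length.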
In one embodiment of the invention, the symbols for the fingerprint-codes are generated for use in (or provision at) the step S202 as follows:
(a) Let the value c≧2 be an integer representing the maximum coalition size that the fingerprinting scheme is to cater for (i.e. the maximum number of pirates 104 who can generate a forgery 114). Let ∈1∈(0,1) be a desired upper bound on the probability of incorrectly identifying a receiver 104 as being a pirate 104, i.e. a false positive probability for the fingerprinting scheme.
(b) Set k=┌ log(2n/∈1)┐. Let dα, r, s and g be positive constants with r>1/2 and let dl, dz, dδ, dα, r, s, g and η be values satisfying the following four requirements:
where h−1 is the function mapping from (0, ∞) to (½, ∞) defined by h−1(x)=(e^x−1−x)/x^2, and h is the inverse function mapping from (½, ∞) to (0, ∞).
(c) Set the length, l, of each receiver's 104 fingerprint-code to be l=dlc^2k. Set δ=1/(dδc). Set Z=dzck. Set δ′=arcsin(√δ) such that 0<δ′<π/4.
(d) For each j=1, . . . , n, the i-th symbol in the fingerprint-code for the j-th receiver 104 (i.e. xj,i) is generated as an independent random variable such that P(xj,i=1)=pi and P(xj,i=0)=1−pi, i.e. the probability that xj,i assumes a first predetermined value is pi and the probability that xj,i assumes a second predetermined value is 1−pi. Here, the value pi is chosen independently from the range [δ,1−δ] according to a distribution with probability density function
Again, the values 1 and 0 are used here for the first and second predetermined symbol values, but it will be appreciated that other symbol values could be used instead.
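By way of illustration, the generation of the biases pi and the symbols xj,i described above might be sketched as follows. Sampling pi uses the substitution p=sin²(r) with r uniform on [δ′, π/2−δ′], which is equivalent to sampling from the stated distribution on [δ,1−δ]; the constant d_delta=8.0 is purely illustrative (the embodiment derives dδ from the four requirements):

```python
import math
import random

def sample_p(c, d_delta=8.0, rng=random):
    """Sample a bias p_i from the distribution on [delta, 1-delta] with
    delta = 1/(d_delta*c): draw r uniformly from [delta', pi/2 - delta']
    with delta' = arcsin(sqrt(delta)) and set p = sin(r)^2.
    d_delta = 8.0 is an illustrative value only."""
    delta = 1.0 / (d_delta * c)
    dp = math.asin(math.sqrt(delta))          # delta'
    r = rng.uniform(dp, math.pi / 2 - dp)
    return math.sin(r) ** 2

def sample_symbol(p, rng=random):
    """Generate one fingerprint symbol: 1 with probability p, else 0."""
    return 1 if rng.random() < p else 0
```

Since sin²(δ′)=δ and sin²(π/2−δ′)=1−δ, the sampled biases always lie in [δ,1−δ], and the distribution is symmetric around ½.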
With this embodiment, the analysis module 118 operates at the steps S204 and S206 as follows. As an initialisation step (not shown in
where g0(p)=−√(p/(1−p)) and g1(p)=√((1−p)/p).
This is done for each active receiver 104 and each symbol yi of the suspect-code received at the step S204 of the current round.
The threshold used at the step S206 is the value Z, so that if, at any round of the method 200 a receiver's 104 score exceeds Z, then that receiver 104 is identified (or accused) of being a pirate 104.
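As an example, the per-symbol score contribution using the functions g0 and g1 given above might be sketched as follows (`score_term` is a hypothetical helper name; the symmetric form rewards agreement between xj,i and the suspect symbol yi and penalises disagreement):

```python
import math

def g0(p):
    return -math.sqrt(p / (1.0 - p))

def g1(p):
    return math.sqrt((1.0 - p) / p)

def score_term(x_ji, y_i, p_i):
    """Symmetric score contribution for one position: reward agreement
    with the suspect symbol y_i, penalise disagreement, weighted by the
    bias p_i."""
    if y_i == 1:
        return g1(p_i) if x_ji == 1 else g0(p_i)
    else:
        return -g1(p_i) if x_ji == 1 else -g0(p_i)
```

For a receiver whose symbols are independent of the suspect-code, each contribution has expectation 0 and second moment 1, which is what makes the threshold Z effective.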
The method 200 may terminate at the step S210 when l symbols of the fingerprint-code for a receiver 104 have been sent to that receiver 104 (i.e. so that the maximum length of fingerprint-code sent to a receiver 104 is l). However, at the step S210, the method 200 may also terminate when c pirates 104 have been identified. Hence, it is possible to terminate the method 200 without actually having to distribute l fingerprint symbols to the receivers 104.
This embodiment essentially takes the above TAR7 static probabilistic fingerprinting scheme and adapts it so as to form a more dynamic probabilistic fingerprinting scheme—the similarity between the various equations and conditions is apparent, except that a number of modifications are present in order to be able to make the transition from TAR7's static nature to the more dynamic nature of embodiments.
With this embodiment, the probability of incorrectly identifying a receiver 104 as being a pirate (i.e. the false positive probability) is at most ∈1, whilst the probability of not managing to identify all pirates 104 (i.e. a false negative probability, and a stronger requirement than merely identifying at least one pirate 104) is again at most ∈2. This applies when the value of w is 1, i.e. when a single symbol is provided/encoded at the step S202 and a single corresponding symbol is received at the step S204 for each round of the method 200. For other values of w, the false positive probability lies between ∈1 and the corresponding false positive probability for the TAR7 scheme mentioned above (which can be half the size of ∈1 of the present embodiment). The mathematical proofs of these false positive and false negative results are provided in chapters 9.3 and 9.4 of the appendix at the end of this description (which form part of a thesis “Collusion-resistant traitor tracing schemes” by Thijs Martinus Maria Laarhoven, to be submitted to the Department of Mathematics and Computer Science, University of Technology, Eindhoven).
As mentioned above, this embodiment essentially takes the above TAR7 static probabilistic fingerprinting scheme and adapts it so as to form a more dynamic probabilistic fingerprinting scheme. It will be appreciated that, in embodiments, any of the other Tardos-based static probabilistic fingerprinting schemes TAR1-TAR6 (or indeed any others) could be used instead of TAR7 as the basis for forming a more dynamic probabilistic fingerprinting scheme, with their various parameters/settings/thresholds being modified so as to achieve the desired false positive and false negative probabilities in the dynamic scheme. Indeed, other non-Tardos-based static probabilistic fingerprinting schemes, such as the BER1 scheme, could be used in embodiments as the basis for forming a more dynamic probabilistic fingerprinting scheme. Additionally, embodiments may make use of binary or non-binary symbol alphabets (as discussed above, for example, with reference to the TAR2 scheme).
The threshold value Z used at the step S208 may remain constant throughout the processing of the method 200. However, in some embodiments, the value Z may be updated to cater for the fact that one or more pirates 104 have been identified. For example, when one or more pirates 104 are identified and de-activated, the threshold may be decreased to account for the fact that the set of active receivers 104 is now reduced.
One observation of note is that the above-mentioned embodiments, and the previously-mentioned static probabilistic fingerprinting schemes, use the value c (the maximum coalition size that the fingerprinting scheme is to cater for) in order to set up the various parameters and to generate the symbols of the fingerprint-codes. For example, in the TAR7 static fingerprinting scheme and in the above-described embodiment of the invention, each symbol xj,i is taken from a probability distribution with a probability density function that is dependent on the value of pi, which is dependent on the value of δ′, which is dependent on the value of δ, which is dependent on the value of c.
In preferred embodiments, each symbol for each fingerprint-code is generated independent of an expected number of colluding-receivers (i.e. independent of the value of c).
In order to remove the dependency on c, the above-mentioned embodiment of the invention (which is based on the TAR7 static probabilistic scheme) may be modified so that the value of each pi is chosen independently from the range (0,1) according to a distribution with probability density function
This removes the dependency of each symbol xj,i on the value of c. The same (or similar) can be done for other embodiments, such as embodiments which are based on the other Tardos-style static fingerprinting schemes TAR1-TAR6. A similar approach can be used for embodiments which are based on the BER1 scheme or on other types of static fingerprinting schemes.
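A sketch of this c-independent generation, together with a check of whether a sampled bias pi lies in the cutoff range [δ,1−δ] appropriate to a given collusion size c, might look as follows. It assumes the full arcsine density f(p)=1/(π√(p(1−p))) on (0,1) (the natural δ→0 limit of the bounded distribution above); the constant d_delta=8.0 and both function names are illustrative only:

```python
import math
import random

def sample_p_arcsine(rng=random):
    """Sample p_i from the full arcsine density f(p) = 1/(pi*sqrt(p(1-p)))
    on (0,1), i.e. independently of any coalition size c: draw r uniformly
    from (0, pi/2) and set p = sin(r)^2."""
    r = rng.uniform(0.0, math.pi / 2)
    return math.sin(r) ** 2

def valid_for_c(p, c, d_delta=8.0):
    """Check whether a bias p lies in the cutoff range [delta, 1-delta]
    appropriate to coalition size c, with delta = 1/(d_delta*c)
    (d_delta = 8.0 is an illustrative value only)."""
    delta = 1.0 / (d_delta * c)
    return delta <= p <= 1.0 - delta
```

A bias sampled from (0,1) may thus be valid for a large collusion size but invalid for a small one, which is the situation described below.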
With such schemes that are independent of c, the method of updating the score Sj for the active receivers 104 should be modified to account for the fact that the symbols of the fingerprint-codes provided to the receivers 104 are now taken from a different distribution (that is not dependent on c). For example, in the Tardos-based schemes, each pi was taken from the range [δ,1−δ] for some value δ (dependent on c) whereas now each pi is taken from the range (0,1). Thus, a value for pi may be valid for a particular collusion size c, i.e. whilst having been taken from the range (0,1), pi happens to lie in the range [δ,1−δ] appropriate to that collusion size c. Conversely, a value for pi may be invalid for a particular collusion size c, i.e. having been taken from the range (0,1), the value of pi happens to lie outside of the range [δ,1−δ] appropriate to that collusion size c. In this way, the symbols generated for the i-th position in the fingerprint-codes provided to the receivers 104 may be valid for certain collusion sizes and may be invalid for other collusion sizes. In the above example, symbols generated for the i-th position of the fingerprint-codes are valid for a collusion-size of c if pi lies in the range δ≦pi≦(1−δ), where δ=1/(dδc); otherwise, they are invalid. However, for other embodiments, it will be appreciated that other criteria will apply as to when symbols for the i-th position of the fingerprint-codes are valid or are not valid for a particular collusion-size of c.
Therefore, in preferred embodiments, the symbols for the fingerprint-codes for the step S202 are generated independently of any collusion-size c (as set out above). The step S206 involves maintaining, for each receiver 104j and for one or more expected (maximum) collusion sizes c1, c2, . . . , ct, a corresponding score S′j,c1, S′j,c2, . . . , S′j,ct. The score S′j,c indicates a likelihood that the j-th receiver 104 is a colluding-receiver under the assumption that the number of colluding-receivers is of size (at most) c. For each of the collusion-sizes c1, c2, . . . , ct, there is a respective threshold Zc1, Zc2, . . . , Zct corresponding to the collusion-size. These respective thresholds are set as discussed above to ensure that a receiver 104 that is not a colluding-receiver 104 will only have a score (i.e. one or more of S′j,c1, S′j,c2, . . . , S′j,ct) that exceeds its corresponding threshold with at most the predetermined (desired) false positive probability ∈1. The step S206 comprises comparing each of the scores S′j,c1, S′j,c2, . . . , S′j,ct with the corresponding threshold Zc1, Zc2, . . . , Zct and identifying the j-th receiver 104 as a colluding-receiver if one or more of the scores exceeds its corresponding threshold.
To cater for the fact that the generation of the fingerprint-symbols for the i-th position in the fingerprint-codes may not be valid for a particular collusion-size, the step S206 only updates a score S′j,c based on a received i-th symbol yi of the suspect-code and the i-th fingerprint-symbol xj,i if symbols at the i-th position of the fingerprint-codes are valid for that collusion size c. In other words, updating the score S′j,c for a collusion-size c comprises disregarding a symbol yi obtained for the i-th position of the suspect-code if symbols generated for the i-th position of the fingerprint-codes are invalid for that collusion-size c.
In this way, a plurality of fingerprinting schemes (catering for different collusion sizes) may effectively be run in parallel—however, the same fingerprint-codes are supplied to the receivers 104. In other words, these embodiments enable coalitions of arbitrary size to be catered for, this being done without having to generate and supply to receivers 104 different fingerprint-codes that are specifically intended for respectively different collusion-sizes.
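The parallel bookkeeping described above can be sketched as follows, for a single receiver j, assuming helper callables score_term(x, y, p) (the per-position score contribution) and valid_for_c(p, c) (the test of whether the bias at a position is valid for collusion size c); all names are illustrative:

```python
def update_parallel_scores(scores, thresholds, x_j, y, p, c_list,
                           score_term, valid_for_c):
    """Update, for one receiver j, the per-coalition-size scores S'_{j,c}
    (one per candidate size c in c_list), skipping positions whose bias
    p_i is invalid for that size.  Returns the sizes c for which the
    threshold Z_c is now exceeded."""
    accused_for = []
    for c in c_list:
        for x_i, y_i, p_i in zip(x_j, y, p):
            if valid_for_c(p_i, c):
                scores[c] += score_term(x_i, y_i, p_i)
        if scores[c] > thresholds[c]:
            accused_for.append(c)
    return accused_for
```

Each candidate collusion size c thus accumulates its own score over only the positions that are valid for it, while all sizes share the same distributed fingerprint-codes.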
The system 400 comprises a computer 402. The computer 402 comprises: a storage medium 404, a memory 406, a processor 408, a storage medium interface 410, an output interface 412, an input interface 414 and a network interface 416, which are all linked together over one or more communication buses 418.
The storage medium 404 may be any form of non-volatile data storage device such as one or more of a hard disk drive, a magnetic disc, an optical disc, a ROM, etc. The storage medium 404 may store an operating system for the processor 408 to execute in order for the computer 402 to function. The storage medium 404 may also store one or more computer programs (or software or instructions or code) that form part of an embodiment of the invention.
The memory 406 may be any random access memory (storage unit or volatile storage medium) suitable for storing data and/or computer programs (or software or instructions or code) that form part of an embodiment of the invention.
The processor 408 may be any data processing unit suitable for executing one or more computer programs (such as those stored on the storage medium 404 and/or in the memory 406) which have instructions that, when executed by the processor 408, cause the processor 408 to carry out a method according to an embodiment of the invention and configure the system 400 to be a system according to an embodiment of the invention. The processor 408 may comprise a single data processing unit or multiple data processing units operating in parallel, in cooperation with each other, or independently of each other. The processor 408, in carrying out data processing operations for embodiments, may store data to and/or read data from the storage medium 404 and/or the memory 406.
The storage medium interface 410 may be any unit for providing an interface to a data storage device 422 external to, or removable from, the computer 402. The data storage device 422 may be, for example, one or more of an optical disc, a magnetic disc, a solid-state-storage device, etc. The storage medium interface 410 may therefore read data from, or write data to, the data storage device 422 in accordance with one or more commands that it receives from the processor 408.
The input interface 414 is arranged to receive one or more inputs to the system 400. For example, the input may comprise input received from a user, or operator, of the system 400; the input may comprise input received from a device external to or forming part of the system 400. A user may provide input via one or more input devices of the system 400, such as a mouse (or other pointing device) 426 and/or a keyboard 424, that are connected to, or in communication with, the input interface 414. However, it will be appreciated that the user may provide input to the computer 402 via one or more additional or alternative input devices. The system may comprise a microphone 425 (or other audio transceiver or audio input device) connected to, or in communication with, the input interface 414, the microphone 425 being capable of providing a signal to the input interface 414 that represents audio data (or an audio signal). The computer 402 may store the input received from the/each input device 424, 425, 426 via the input interface 414 in the memory 406 for the processor 408 to subsequently access and process, or may pass it straight to the processor 408, so that the processor 408 can respond to the input accordingly.
The output interface 412 may be arranged to provide a graphical/visual output to a user, or operator, of the system 400. As such, the processor 408 may be arranged to instruct the output interface 412 to form an image/video signal representing a desired graphical output, and to provide this signal to a monitor (or screen or display unit) 420 of the system 400 that is connected to the output interface 412. Additionally, or alternatively, the output interface 412 may be arranged to provide an audio output to a user, or operator, of the system 400. As such, the processor 408 may be arranged to instruct the output interface 412 to form an audio signal representing a desired audio output, and to provide this signal to one or more speakers 421 of the system 400 that is/are connected to the output interface 412.
For example, when the system 400 is a receiver 104, the output interface 412 may output to an operator a representation of a watermarked item of content 110 that has been received by the receiver 104.
Finally, the network interface 416 provides functionality for the computer 402 to download data from and/or upload data to one or more data communication networks (such as the Internet or a local area network).
The following provides various mathematical analysis and proofs in support of the above-mentioned embodiments. In these sections:
[BT08] refers to document “Improved versions of Tardos' fingerprinting scheme” (Oded Blayer et al., Des. Codes Cryptography, 48, pages 79-103, 2008).
Let h−1:(0, ∞)→(½, ∞) be defined by h−1(x)=(e^x−1−x)/x^2. Let h:(½, ∞)→(0, ∞) denote its inverse function, so that e^w≦1+w+λw^2 if and only if w≦h(λ). Let dα, r, s, g be positive constants with r>½ and let dl, dz, dδ, dα, r, s, g, η satisfy the following four requirements.
Let the Tardos scheme be constructed as below.
1. Initialization
(a) Take l=dlc^2k as the code length, and take the parameters δ and Z as δ=1/(dδc) and Z=dzck. Compute δ′=arcsin(√δ) such that 0<δ′<π/4.
(b) For each fingerprint position i∈[l], choose pi independently from the distribution given by the following cumulative distribution function F:
The probability density function f of this distribution is given by:
This function is biased towards δ and 1−δ and symmetric around ½.
2. Codeword Generation
(a) For each position i∈[l] and for each user j∈[n], select the entry Xji of the code matrix X independently by Xji˜Ber(pi).
3. Accusation
(a) For each position i∈[l] and for each user j∈[n], calculate the score Sji according to:
(b) For each user j∈[n], calculate the total accusation sum Sj=Σ_{i=1}^{l}Sji. User j is accused if and only if Sj>Z.
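For concreteness, the initialization, codeword generation and accusation steps above might be sketched in Python as follows. This is a sketch only: the constants d_l, d_z, d_delta are placeholders (the four requirements determine their optimal values), and the symmetric accusation sum follows the g0/g1 scoring described in the text:

```python
import math
import random

def tardos_static(n, c, eps1, rng=None):
    """Sketch of steps 1-2 (initialization and codeword generation) with
    illustrative constants d_l=10, d_z=10, d_delta=8 (placeholders only).
    Returns (codeword matrix X, biases p, code length l, threshold Z)."""
    rng = rng or random.Random()
    d_l, d_z, d_delta = 10.0, 10.0, 8.0
    k = math.ceil(math.log(2 * n / eps1))
    l = int(d_l * c * c * k)                  # code length l = d_l c^2 k
    Z = d_z * c * k                           # accusation threshold Z = d_z c k
    delta = 1.0 / (d_delta * c)
    dp = math.asin(math.sqrt(delta))          # delta'
    p = [math.sin(rng.uniform(dp, math.pi / 2 - dp)) ** 2 for _ in range(l)]
    X = [[1 if rng.random() < p[i] else 0 for i in range(l)] for _ in range(n)]
    return X, p, l, Z

def accuse(X, p, y, Z):
    """Step 3: accuse every user whose symmetric accusation sum S_j
    exceeds Z, given the suspect-code y decoded from the forgery."""
    accused = []
    for j, row in enumerate(X):
        S = 0.0
        for x, yi, pi in zip(row, y, p):
            q = math.sqrt((1 - pi) / pi)
            s = q if x == 1 else -1 / q       # score for y_i = 1
            S += s if yi == 1 else -s         # sign flips for y_i = 0
        if S > Z:
            accused.append(j)
    return accused
```

With these placeholder constants the sketch already separates a pirate who simply re-broadcasts his own codeword from the innocent users, whose accusation sums concentrate around zero.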
Then the following properties hold.
Theorem 8.1 (Soundness). Let j∈U be an arbitrary user, and let C⊂U\{j} be a coalition of any size not containing j. Let ρ be some pirate strategy employed by this coalition. Then

P[j∈σ(ρ(X))]<∈1/n.
Therefore the probability of accusing at least one innocent user is at most ∈1.
Theorem 8.2 (Completeness). Let C⊂U be a coalition of size at most c, and let ρ be any pirate strategy employed by this coalition. Then

P[C∩σ(ρ(X))=∅]<∈2.
Therefore the probability of not accusing any guilty users is at most ∈2.
In the following two Sections we will prove the soundness and completeness properties. The proofs are both very similar to the proofs in [BT08], except for some small adjustments to incorporate the symbol-symmetric accusation function. Then in Section 8.5 we will look at the asymptotic behavior of the scheme, as c goes to infinity. In Section 8.6 we then give results similar to those in [BT08, Section 2.4.5] on how to find the optimal parameters dδ, dα, dz, dl, given the parameters r, s, g. Finally in Section 8.7 we use these formulas to calculate optimal parameters numerically, for different values of c and η.
We will prove the soundness property as stated in Theorem 8.1 under the assumptions on the parameters stated earlier. For this proof we will only use the first two assumptions.
Proof of Theorem 8.1. We want to prove that the probability of accusing any particular innocent user is at most ∈1/n. Since a user is accused if and only if his score Sj exceeds Z, we therefore need to prove that P[Sj>Z]≦∈1/n for innocent users j.
First of all, we use α=1/(dαc) and the Markov inequality to obtain

P[Sj>Z]=P[e^{αSj}>e^{αZ}]≦e^{−αZ}·E[e^{αSj}].
Next we fill in Sj=Σ_{i=1}^{l}Sji to get
Since Sji<√(1/δ)=√(dδc), it follows that αSji<√dδ/(dα√c). From Requirement (R1) we know that √dδ/(dα√c)≦h(r) for our choice of r, hence αSji<h(r). From the definition of h we know that e^w≦1+w+rw^2 exactly when w≦h(r). Using this with w=αSji we get
E[e^{αSji}]≦E[1+αSji+r(αSji)^2]=1+αE[Sji]+rα^2·E[Sji^2].
We can easily calculate E[Sji] and E[Sji^2], as yi and Xji are independent for innocent users j. Writing qi=√((1−pi)/pi), if yi=0, then with probability pi we have Xji=1 and Sji=−qi, while with probability 1−pi we have Xji=0 and Sji=1/qi. Similarly, if yi=1, then with probability pi we have Xji=1 and Sji=qi, while with probability 1−pi we have Xji=0 and Sji=−1/qi. So we get
So both expectation values are the same for each value of yi, and we get

E[Sji]=0, (8.2)

E[Sji^2]=1. (8.3)
So it follows that

E[e^{αSji}]≦1+rα^2.
Using all the above results we get
Since we need to prove that P[Sj>Z]≦∈1/n, and since 1+rα^2≦e^{rα^2}, the proof would be complete if e^{−αZ+rα^2l}≦∈1/n.
Rewriting this equation leads exactly to Requirement (R2), which is assumed to hold. This completes the proof.
Note that this proof has barely changed, compared to the original proof in [BT08]. The only difference is that now the scores are counted for all positions i, instead of only those positions where yi=1. However, since in the proof in [BT08] this number of positions was then bounded by l, the result was the same there as well. This explains why the first two assumptions on the parameters are exactly the same as those in [BT08].
For guilty users we have to look carefully where changes occur. We will walk through the proof of [BT08, Theorem 1.2] and note where the formulas change.
Proof of Theorem 8.2. For simplicity, we assume users 1, . . . , c are exactly the c colluders, and we will prove that with high enough probability the algorithm will accuse at least one of them. This then proves that for any coalition of size at most c, we will accuse at least one of the pirates with high probability.
First, we write the total accusation sum of all colluders together as follows.
Here xi is the number of ones in the i-th position among the colluders' codewords, qi=√((1−pi)/pi), and yi is the output symbol of the pirates at position i. Now if no colluder is accused, then all scores of all colluders are below Z. Hence if the total score S exceeds cZ, then at least one of the users is accused. So it suffices to prove that P[S<cZ]≦∈2.
Now we use the Markov inequality and a constant β=s√δ/c>0 to get

P[S<cZ]=P[e^{−βS}>e^{−βcZ}]≦e^{βcZ}·E_{y,X,p}[e^{−βS}]. (8.4)
Writing out the expectation value over all values of p1, . . . , pl and X, and filling in the above definition of S, gives us
Since each pi is independently and identically distributed according to F, and yi is either 0 or 1, we can bound the above by
In other words, E0,xi,pi calculates the expectation value for position i when yi=0, while E1,xi,pi calculates the expectation value for position i when yi=1. If xi=0 then all pirates see a 0, hence by the marking assumption yi=0 and we have to take E0,xi,pi. Similarly, if xi=c, then the pirate output is necessarily a 1 and we have to take E1,xi,pi. In all other cases, we bound the value by taking the worst-case scenario, where the pirates choose exactly that symbol which leads to the lowest expected increase of their total score, and therefore also the highest value of E[e^{−βS}].
The summation is done over all {0, 1} matrices X, while the term inside the summation only depends on the number of ones in each column. So after switching the summation and the product, we can also simply tally all the terms which have the same contribution, to get
Remarking that the summations are actually equivalent and independent for all i, and introducing some more notation, we can write
and p is a random variable distributed according to F. For convenience we will also write E0,x and E1,x for E0,x,p and E1,x,p respectively, as the distribution of p is the same for all values of x.
Now, using β=s√δ/c, we bound the values −β(xq−(c−x)/q) and −β((c−x)/q−xq)=+β(xq−(c−x)/q) in the exponents of E1,x and E0,x as follows.
So ±β(xq−(c−x)/q)≦s for our choice of β. So we can use the inequality e^w≦1+w+h^{−1}(s)w^2, which holds for all w≦s, with w=±β(xq−(c−x)/q), to obtain
Introducing more notation, this can be rewritten to
E0,x≦Fx+βE2,x+h^{−1}(s)β^2E3,x,

E1,x≦Fx−βE2,x+h^{−1}(s)β^2E3,x,
where
We first calculate E2,x explicitly. Note that taking p from the Tardos distribution function is equivalent to taking some value r uniformly at random from [δ′, π/2−δ′] and taking p=sin^2(r). Writing the variables p and q in terms of r thus gives us p=sin^2(r), 1−p=cos^2(r), q=cot(r), 1/q=tan(r), so that E2,x can also be written as
The primitive of the integrand is given by I(r)=sin^{2x}(r)cos^{2(c−x)}(r)/(π−4δ′), so we get
We can also bound E2,x from above and below as
We can use these bounds to bound Mx, 0<x<c, to get
Since δ≪1−δ, the maximum of the two terms is the first term when x≦c/2, and it is the second term when x>c/2. For the positions where the marking assumption applies, i.e. x=0 and x=c, we do not use the bounds on E2,x, but use the exact formula from (8.6) to obtain
Substituting the bounds on Mx in the summation over Mx from (8.5) gives us
For the summation over E3,x, let us define a sequence of random variables {Ti}_{i=1}^{c} according to Ti=q with probability p and Ti=−1/q with probability 1−p. Using Equations (8.2) and (8.3) for pi=p and qi=q, we get that E_p[Ti]=0 and E_p[Ti^2]=1. Also, since Ti and Tj are independent for i≠j, we have that E_p[TiTj]=0 for i≠j. Therefore we can write
But writing out the definition of the expected value, we see that the left hand side is actually the same as the summation over E3,x, so
Also we trivially have that
For the summations over ⌊c/2⌋ and ⌈c/2⌉ terms we use the upper bound
Note that this bound is quite sharp; since δ≪1−δ, the summation is dominated by the terms with low values of x. Adding the terms with c/2<x<c to the summation has a negligible effect on its value.
Now applying the previous results to (8.7), and using (1−δ)^c≧1−δc for all c, gives us that
We would like to achieve that, for some g>0,
Filling in β=s√δ/c and δ=1/(dδc) and writing out the second inequality, this leads to the requirement that
This is exactly Requirement (R3), which is assumed to hold. So applying the result from Equation (8.8) to Equations (8.4), (8.5) and (8.7) gives us that
Since we want that P[S<cZ]≦e^{−ηk}≦(∈1/n)^η=∈2, we need that

βcZ−gβl≦−ηk.
Filling in β=s√δ/c, l=dlc^2k, Z=dzck and δ=1/(dδc), and writing out both sides, we get
This is exactly Requirement (R4), which was assumed to hold. This completes the proof.
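The primitive of the E2,x integrand used in the completeness proof above can be checked numerically; the sketch below compares a central-difference derivative of the claimed primitive against the integrand after the substitution p=sin²(r) (the parameter values x, c and δ′ are chosen arbitrarily for illustration):

```python
import math

def integrand(r, x, c, dpr):
    """Integrand of E_{2,x} after the substitution p = sin^2(r):
    (x*cot(r) - (c-x)*tan(r)) * sin(r)^(2x) * cos(r)^(2(c-x)) * 2/(pi - 4*delta')."""
    return ((x / math.tan(r) - (c - x) * math.tan(r))
            * math.sin(r) ** (2 * x) * math.cos(r) ** (2 * (c - x))
            * 2.0 / (math.pi - 4 * dpr))

def primitive(r, x, c, dpr):
    """Claimed primitive I(r) = sin^(2x)(r) cos^(2(c-x))(r) / (pi - 4*delta')."""
    return (math.sin(r) ** (2 * x) * math.cos(r) ** (2 * (c - x))
            / (math.pi - 4 * dpr))
```

Differentiating sin^{2x}(r)cos^{2(c−x)}(r) indeed yields the factor 2(x·cot(r)−(c−x)·tan(r)) times the same product, matching the integrand.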
Compared to [BT08], we see that the third assumption has changed: a 1 has changed to a 2, and a 2 has changed to a 4. The most important change is the 1 changing to a 2, since the term 1/π (now 2/π) is the most dominant factor (and the only positive term) on the left hand side. By increasing this by a factor 2, we get that g≦2/π instead of g≦1/π. Especially for large c, this plays an important role, and it is essentially the reason why the required codelength decreases by a factor 4 compared to Blayer and Tassa's scheme.
While the other change (the 2/(dδπ) changing to 4/(dδπ)) does not make a big impact on the optimal choice of parameters for large c, it does influence the required codelength for smaller c. Because of this change, we now subtract more from the left hand side, so that the value of g is bounded more sharply from above and cannot simply be taken twice as high.
Finally, after using the third assumption in the proof above, the analysis remained the same as in [BT08]. So the fourth assumption is also exactly the same as in [BT08].
We now look at the asymptotic behavior of the scheme as c goes to infinity. When c goes to infinity, the distributions of the scores of both guilty and innocent users converge to the Normal distribution with certain parameters. In [SKC08, Section 6] Skoric et al. investigated this Gaussian approximation, and concluded that with Tardos' choice of g0, g1 and F, the required codelength is l≈(π^2/2)c^2 ln(n/∈1). This means that for sufficiently large c we will certainly need that dl≧π^2/2.
In Tardos' original paper, Tardos proved that dl=100 is sufficient for c≧16. This shows that either Tardos' choice of parameters was not optimal, or that the proof method is not tight. In [SKC08] the symmetric accusations were introduced, showing that even dl≧π^2 is sufficient for proving soundness and completeness, for sufficiently large c. In [BT08] the analysis of the scheme, which was already tightened in [SKC08], was further tightened, but no symmetric accusations were used. Applying asymptotics to their scheme shows that using their analysis, dl>2π^2 is sufficient for proving security.
Here we will show that by combining the symmetric accusations from Skoric et al. with the tighter analysis from Blayer and Tassa, as we did above, we can prove security for dl>π^2/2. This means that the gap of a factor 2 between provability and reality, as in [SKC08], has now been closed. This is also why we refer to our scheme as the optimal Tardos scheme, as for c→∞ our scheme achieves the theoretically optimal codelength.
Theorem 8.3. For c>>1 the above construction gives an ∈1-sound and ∈2-complete scheme with parameters
Proof. In our scheme we will take optimal values of dl, dz, dδ, dα, r, s, g such that all requirements are met and dl is minimized. Hence showing that some parameters dl, dz, dδ, dα, r, s, g exist, which meet all requirements and have dl↓π^2/2 as c→∞, is sufficient for proving the Theorem.
For the second requirement, note that we can write it as a quadratic inequality in dα, as

dα^2−dzdα+rdl≦0. (8.10)
The coefficient of dα^2 is positive, while the quadratic expression has to be non-positive. So this requirement is met if and only if dα lies between its two roots, which therefore must exist.
These roots are dα=(dz±√(dz^2−4rdl))/2, with midpoint dz/2. Hence taking dα=dz/2 always satisfies this equation. The only remaining requirement is then that the quadratic equation in dα in fact has a real-valued solution. So we need that the term inside the square root is non-negative, i.e.

dz^2≧4rdl. (8.12)
For the first requirement we then see that with dδ=O(ln(c)) and r=½+O(1/ln(c)), the right hand side converges to 0 as c→∞. Since the left hand side, dα=dz/2, is positive (our dz will converge to π>0), this requirement will certainly be satisfied for sufficiently large c.
For the third requirement, note that the terms 4/(dδπ) and h^{−1}(s)s/√(dδc) both converge to 0 as c goes to infinity. This means that for sufficiently large c, the inequality will converge to
For the fourth requirement, again note that the term on the right hand side disappears as c goes to infinity. So this inequality converges to

gdl−dz≧0. (8.14)
Taking g≈2/π and solving these equations gives us

dz≧2rπ, (8.15)

dl≧rπ^2. (8.16)
With r=½+O(1/ln(c))→½ as c→∞, we thus get
By taking c sufficiently large, one can thus get dl arbitrarily close to π2/2, as was to be shown.
Note that near the end of the above proof, we had the two equations
dz≧2rπ (8.19)
dl≧rπ2 (8.20)
Here we used that r can be taken in the neighborhood of ½ to get the final result, dl>π2/2. In [SKC08] however, no such variable r was used, as it was simply fixed at 1. Taking r=1 in these equations indeed gives
dz≧2π (8.21)
dl≧π2 (8.22)
as was the result in [SKC08, Section 5.2]. This thus shows where the proof by Skoric et al. lost the factor 2 in the asymptotic case; if they had taken r as a parameter in their analysis, they would have gotten the same asymptotic results as we did above.
Furthermore, note that to make some terms in the third inequality disappear, we needed that dδ→∞ as c→∞. This means that in fact the offsets δ and 1−δ converge to 0 and 1 faster than 1/c. This raises the question whether the parameterization δ=1/(dδc) is appropriate; perhaps δ=1/(dδc·ln(c)) or δ=1/(dδc1+μ) would make more sense, as then dδ may converge to a constant instead. Numerical searches for the optimal choice of parameters for c→∞ show that dδ roughly grows as O(c1/3), which suggests one should take δ=1/(dδc4/3) for some constant dδ. Note that with dδ=O(c1/3), the terms √(dδ)/(h(r)√c) and 4/(dδπ) are both of order O(c−1/3), and both terms therefore converge to 0 at roughly the same speed. This possibly explains why this choice of dδ is optimal.
Similar to the analysis done in the paper by Blayer and Tassa, we can also investigate the optimal choice of parameters such that all requirements are satisfied and dl is minimized. As the requirements only changed in two places, the formulas for the optimal values of dδ, dα, dz, dl as given in [BT08, Section 2.4.5] also only change slightly. By changing these two numbers in their formulas, we get the following optimal choice for our parameters, for given g, r, s:
One can then numerically find the optimal choice of r>½, s>0 and 0<g<2/π such that dl is minimized.
An optimal solution to the equations for c≧2 and η=1 can be found numerically as follows:
dl=23.79, dz=8.06, dδ=28.31, dα=4.58, g=0.49, r=0.67, s=1.07 (8.27)
This means that, taking the constants as above, a codelength of l=24c2ln(n/∈1) is sufficient to prove soundness and completeness for all c≧2 and ∈2≧∈1/n. Compared to the original Tardos scheme, which had a codelength of l=100c2ln(n/∈1), this gives an improvement of a factor of more than 4. Furthermore, we can prove that this scheme is ∈1-sound and ∈2-complete for any value of c≧2, while Tardos' original proof only works for c≧16.
Similarly, we can consider a more practical scenario where ∈2>>∈1/n and numerically find optimal values. If ∈2=½ is sufficient, and ∈1=10−3 and n=106, then η≈0.033, and the optimizations give us dl≈10.89 and dz≈5.76. So with this larger value of ∈2, a codelength of l<11c2ln(n/∈1) is sufficient to prove the soundness and completeness properties for any c≧2. Also, if we let c increase in the four requirements (i.e. if we only want provability for c≧c0 for some c0>2), then the requirements become weaker and an even shorter codelength can be achieved. The following two tables show the optimal values of dl for several values of c and η, for both the original Blayer-Tassa scheme and our optimal symmetric Blayer-Tassa-Tardos scheme.
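The codelength comparison above is easily reproduced. The following Python sketch, with k=ln(n/∈1) and illustrative values of n and ∈1, compares the original Tardos constant dl=100 with the optimized dl=23.79 from (8.27):

```python
import math

def codelength(d_l, c, n, eps1):
    """Codelength l = d_l * c^2 * ln(n/eps1), with k = ln(n/eps1)."""
    return d_l * c * c * math.log(n / eps1)

c, n, eps1 = 2, 10**6, 1e-3
l_tardos = codelength(100, c, n, eps1)     # original Tardos scheme
l_optimal = codelength(23.79, c, n, eps1)  # constants from (8.27)
assert l_tardos / l_optimal > 4  # an improvement of a factor more than 4
```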
(Table: optimal values of dl for several values of c and η in the Blayer-Tassa scheme.)
(Table: optimal values of dl for several values of c and η in the optimal Tardos scheme.)
Note that the original Tardos scheme used l=100c2k for all c and η, which translates to always using dl=100. One can see in the table for the optimal Tardos scheme that in many cases, even for reasonably small c and large η, this gives an improvement of a factor 10 or more compared to this original scheme. Compared to the Blayer-Tassa scheme, our optimal scheme gives an improvement of a factor slightly less than 4 in all cases.
In this Chapter we will discuss a dynamic version of the Tardos scheme. Whereas the normal Tardos scheme discussed earlier belongs in the category of probabilistic static schemes, we will show how to construct a probabilistic dynamic scheme based on the Tardos scheme, and why it has advantages over the original Tardos scheme.
Since our dynamic Tardos scheme makes use of the analysis of the static Tardos scheme, we will use the improved Tardos scheme from Chapter 8 as a building block. Not only does it achieve short provably secure codelengths, but its proof of completeness also uses a variable β=O(√δ/c) instead of β=O(1/c), as was done in Tardos' extended article to prove completeness for c≦15. The reason why this is useful will become clear later.
Let us start with the construction, which can be summarized in a few lines. Instead of distributing all symbols of the codewords simultaneously, we give users one symbol at a time. And instead of looking at scores at time l, we now calculate scores Sj(t)=Σti=1Sji at every time step. But most importantly: at any time t, we throw out all users with scores Sj(t)>Z.
Below is the construction, which now consists of two phases, as the codeword generation and accusation phases are mixed. In this scheme, we take k=┌log(2n/∈1)┐, which is different from our earlier choice k=┌log(n/∈1)┐. The factor 2 is put in the logarithm to compensate for the extra factor 2 we get when proving the soundness property. Note that ┌log(2n/∈1)┐−┌log(n/∈1)┐≈log(2)<1, so especially for big values of n and small values of ∈1 we will hardly notice the difference between the two definitions of k.
1. Initialization
(a) Take l=dlc2k as the code length, and take the parameters δ and Z as δ=1/(dδc) and Z=dzck. Compute δ′=arcsin(√δ), such that 0<δ′<π/4.
(b) For each fingerprint position 1≦i≦l choose pi independently from the distribution defined by the following cumulative distribution function F:
Its associated probability density function f(p) is biased towards δ and 1−δ and symmetric around ½.
(c) Set every user's accumulated score Sj(t) at 0.
2. Codeword generation, accusation
(a) For each position 1≦i≦l, do the following.
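The construction above can be sketched in Python as follows. This is only an illustrative simulation, not the full scheme: sample_p samples pi from the (truncated) arcsine distribution via its inverse CDF, score is the symmetric score function from Chapter 8, and the pirate_output callback is a stand-in for whatever strategy the coalition uses:

```python
import math
import random

def sample_p(delta):
    # Inverse-CDF sample from the arcsine density, truncated to [delta, 1-delta]
    lo, hi = math.asin(math.sqrt(delta)), math.asin(math.sqrt(1 - delta))
    return math.sin(random.uniform(lo, hi)) ** 2

def score(x, y, p):
    # Symmetric (Skoric et al.) score for one position
    if x == y:
        return math.sqrt((1 - p) / p) if y == 1 else math.sqrt(p / (1 - p))
    return -math.sqrt(p / (1 - p)) if y == 1 else -math.sqrt((1 - p) / p)

def dynamic_tardos(n, l, Z, delta, pirate_output):
    """Distribute one symbol at a time; disconnect any user whose
    accumulated score exceeds Z. Returns the set of disconnected users."""
    S = [0.0] * n
    active = set(range(n))
    disconnected = set()
    for _ in range(l):
        p = sample_p(delta)
        x = {j: (1 if random.random() < p else 0) for j in active}
        y = pirate_output(x, active)   # forgery symbol of the active pirates
        for j in list(active):
            S[j] += score(x[j], y, p)
            if S[j] > Z:               # throw out user j immediately
                active.discard(j)
                disconnected.add(j)
    return disconnected
```

A toy run, with a trivial pirate strategy that always outputs 1, would be `dynamic_tardos(10, 50, 4.0, 0.01, lambda x, active: 1)`.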
Ok, so how does this scheme work, and why does it help that we disconnect users in between? Well, for innocent users we expect that things do not change much compared to the static Tardos scheme. The probability of accidentally throwing out an innocent user increases compared to the static scheme, since an innocent user could have had Sj(t)>Z and Sj(l)<Z for some 0<t<l. But we will show that, compared to the static Tardos scheme, the false positive probability increases by at most a factor 2.
For guilty users however, we get an important advantage, based on the proof construction from the original Tardos scheme. There, to prove that at least one guilty user gets accused, we proved that S<cZ occurs only with low probability. In all other cases, by the pigeonhole principle, at least one of the scores will be above Z, hence at least one pirate is caught. But now, since we throw out users as soon as their scores exceed Z, we know that pirates will in fact never get a score higher than Z′=Z+maxpSji(p), which is relatively close to Z. So the probability of catching all colluders is in fact related to Pr[S<cZ′]: if not all pirates are caught, then it follows that S<cZ′. And since Pr[S<cZ′]≈Pr[S<cZ] for Z′≈Z, we see that the probability of not catching all colluders can now be bounded from above by roughly the same ∈2 as the one bounding the probability of not catching any colluders in the static scheme. So by following the Tardos analysis, we can show that the dynamic Tardos scheme will catch all colluders with high probability, and will catch no innocent users with high probability.
For the construction, we again used auxiliary variables dl, dz and dδ, as we did in Chapter 8. We will follow the same proof methods from the static case to prove our results, again based on several assumptions. As it turns out, the following assumptions are sufficient.
Let dα, r, s, g be positive constants with r>½, and let dl, dz, dδ, dα, r, s, g, η, k satisfy the following four requirements.
Let the dynamic Tardos scheme be constructed as above. Then the following properties hold.
Theorem 9.1 (Soundness). Let j∈U be an arbitrary user, and let C⊂U\{j} be a coalition of any size not containing j. Let ρ be some pirate strategy employed by this coalition. Then
Pr[j∈σ(ρ(X))]<∈1/n.
Therefore the probability of accusing at least one innocent user is at most ∈1.
Theorem 9.2 (Special completeness). Let C⊂U be a coalition of size at most c, and let ρ be any pirate strategy employed by this coalition. Then
Pr[C⊄σ(ρ(X))]<∈2.
Therefore the probability of not accusing all guilty users is at most ∈2.
The completeness property stated above is different from the completeness property in the static setting. Here we require that all pirates are caught, instead of at least one.
First we prove the upper bound on the probability that an innocent user accidentally gets accused. The bound relates the false positive probability in the dynamic Tardos scheme to the false positive probability in the static Tardos scheme. One can then use the proof of the static Tardos scheme to get an absolute upper bound on the false positive error probability.
Lemma 9.3. Let j∈U be an arbitrary user, and let C⊂U\{j} be a coalition of any size not containing j. Let ρ be some pirate strategy employed by this coalition. Then
Pr[j∈σ(ρ(X))]≦2·Pr[Sj(l)>Z] (9.1)
In other words, the probability of disconnecting an innocent user j in the dynamic scheme is at most a factor 2 bigger than the probability of accusing an innocent user j in the static Tardos scheme.
Proof. Let j be some innocent user, and let its score at time t be denoted by Sj(t). Let A be the event that an innocent user j gets accused in our dynamic Tardos scheme. In other words, A is the event that Sj(t0)≧Z for some t0∈{0, . . . , l}. Finally let B be the event that user j gets accused in the static Tardos scheme with the same parameters, i.e. the event that Sj(l)≧Z if we were to use the same fingerprinting code in the dynamic case as in the static case.
Now Pr[A|B]=1, as an accusation in the static scheme automatically implies an accusation in the dynamic scheme. For Pr[B|A], the conditioning gives us that there exists some time t0 such that Sj(t0)=Z+α0 for some α0∈[0, √(1/δ)]. Since E[Sj(t+1)−Sj(t)]=0 and |Sj(t+1)−Sj(t)|≦√(1/δ) with probability 1 for any time t, the process {Sj(t)}t≧0 describes a random walk with zero drift. In fact, the process {Sj(t)}t≧t0 starting at time t0 is also a random walk with zero drift.
Therefore we have Pr[Sj(l)≧Sj(t0)]=½ and thus Pr[Sj(l)≧Z|Sj(t0)≧Z]≧½. (If Sj(t0)>Sj(l)>Z, then the event {Sj(l)≧Z} does take place while {Sj(l)≧Sj(t0)} does not hold; this explains the use of a greater-equal sign instead of an equality sign.) So Pr[B|A]≧½.
Finally we use Bayes' Theorem, which says that for any two events A and B, Pr[A]·Pr[B|A]=Pr[B]·Pr[A|B]. Applying this to our A and B gives us Pr[A]≦2·Pr[B], hence
Pr[j∈σ(ρ(X))]≦2·Pr[Sj(l)>Z].
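The factor 2 can also be observed numerically in a simplified model of the innocent user's score process: a symmetric ±1 random walk (an assumption made purely for illustration; the real increments are bounded by √(1/δ)). The Python sketch below computes, by exact dynamic programming, the probability that the walk ever reaches Z (event A) and the probability that it ends at or above Z (event B), and checks Pr[A]≦2·Pr[B]:

```python
from fractions import Fraction

def walk_probs(l, Z):
    """For a +/-1 random walk of length l, return
    (P[max_t S(t) >= Z], P[S(l) >= Z]), computed exactly."""
    half = Fraction(1, 2)
    states = {(0, False): Fraction(1)}  # (position, ever reached Z?)
    for _ in range(l):
        nxt = {}
        for (s, hit), pr in states.items():
            for step in (1, -1):
                t = s + step
                key = (t, hit or t >= Z)
                nxt[key] = nxt.get(key, Fraction(0)) + pr * half
        states = nxt
    p_dynamic = sum(pr for (s, h), pr in states.items() if h)      # event A
    p_static = sum(pr for (s, h), pr in states.items() if s >= Z)  # event B
    return p_dynamic, p_static

p_a, p_b = walk_probs(20, 4)
assert p_b <= p_a <= 2 * p_b  # the factor-2 bound of Lemma 9.3
```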
Proof of Theorem 9.1. We follow the analysis from Section 8.3, using Requirement (R1), to get
Pr[Sj(l)>Z]≦e−αZ+rα2l (9.2)
Using the result from Lemma 9.3 we get
Pr[j∈σ(ρ(X))]≦2·Pr[Sj(l)>Z]≦2e−αZ+rα2l (9.3)
We want to get that
Pr[j∈σ(ρ(X))]≦2e−k≦∈1/n (9.4)
So the proof would be complete if
−αZ+rα2l≦−k (9.5)
Rewriting this inequality gives exactly Requirement (R2), which was assumed to hold. This completes the proof.
Lemma 9.4. Consider the following modification of the static Tardos fingerprinting game.
First users are assigned codewords according to the static Tardos scheme. Then, instead of being forced to output the complete forgery {right arrow over (y)}, colluders are allowed to choose any position i and send to the distributor a symbol yi, satisfying the marking assumption. Then the distributor sends to all users the value of pi, and symbol yi can no longer be changed. This is repeated until all l positions are filled out by the coalition.
In this scenario, we have:
Prstatic[S≦cZ′]≦∈2/2 (9.6)
Proof. The proof of completeness from Section 8.4 does not use the fact that colluders do not know pi when choosing yi′, for i′≠i. So we will simply follow that proof, replacing the occurrences of Z by Z′.
From the analysis of Section 8.4 and Requirement (R3) it follows that
E{right arrow over (y)},X,{right arrow over (p)}[e−βS]≦e−gβl (9.7)
Combining this with the Markov inequality as before, we get
Pr[S<cZ′]≦eβcZ′·E{right arrow over (y)},X,{right arrow over (p)}[e−βS]≦eβcZ′−gβl (9.8)
Since we want that Pr[S<cZ′]≦e−ηk≦(∈1/2n)η=∈2/2, we need that
βcZ′−gβl≦−ηk.
Filling in β=s√δ/c, l=dlc2k, Z′=dzck+1/√δ and δ=1/(dδc), and writing out both sides, we get
This is exactly Requirement (R4), which completes the proof.
Proof of Theorem 9.2. Assume there exists some pirate strategy ρd in the dynamic game such that with probability ρ>∈2 at least one pirate survives up to time l. In these cases we have that the total sum of the pirates' scores at time l is below cZ′, i.e. S(l)<cZ′. We now choose a strategy ρs for the modified static model such that the score in the static model in these cases is also below cZ′ with probability more than ½. This means that with this strategy for the modified static model, with probability ρ/2>∈2/2, the total score will be below cZ′. From Lemma 9.4 we then get a contradiction, which means that our assumption was wrong: there is no strategy ρd in the dynamic Tardos scheme such that with probability more than ∈2 not all colluders are accused. This then completes the proof.
After receiving all codewords in the static game, we first choose position 1 of strategy ρs. We make use of an oracle O, which will play the dynamic Tardos game with anyone, using the good strategy ρd. We tell the oracle that no one is disconnected, and we send the oracle the first positions of our codewords. Then O returns a value y1, and we forward it to the distributor. He sends us the value p1, and we calculate the scores Sj(1) for each user. We now send O the second positions of our codewords, but only for those users for which Sj(t) has not yet exceeded Z. Once a user's score exceeds Z at some time t0, we set Sj(t)=Sj(t0) for all t0≦t≦l, and we add the scores Sji, i>t0, to a separate score function Uj(t).
After this, the oracle sends us y2, and we forward it to the distributor. He sends us p2, we calculate the scores Sj(2), and we again send the subset of the seen symbols to O. We repeat this procedure until all positions have been sent to the distributor.
Using this strategy, we have played a dynamic game with O (in which we acted as the distributor), using the same matrix X that was used for the static game we played with the real distributor (in which we acted as the coalition). If X belongs to one of the cases for which ρd lets at least one pirate survive, then ΣjSj(l)<cZ′. However, in the static game, the scores for each user are given by Sj=Sj(l)+Uj(l), and the total coalition score is given by S=S(l)+U(l).
If a user j is disconnected at time t0, then he no longer participates in the coalition. This means that for i>t0, the output yi and the symbol Xji are independent. Hence the score Sji is then a random variable, which is bigger or smaller than 0 each with probability ½. Since the score U(l) is based on those symbols that were not taken into account for outputting the yi, the score U(l) is also bigger or smaller than 0 with probability ½.
So concluding, we get that using the oracle O, we can construct a strategy for the static game such that with probability at least ρ/2>∈2/2 not all users are caught. This completes the proof.
Some advantages of our construction are as follows.
1. With this scheme, we have certainty about catching all pirates, rather than at least one pirate.
2. The scheme uses a binary alphabet, which is the smallest possible alphabet.
3. The codelength needed (the time needed) is relatively short: it is at most O(c2ln(n/∈1)), and if all pirates are caught earlier, we may terminate sooner.
4. Codewords of users are independent, so it is impossible to frame a specific user.
5. Codeword positions are independent of earlier positions, so for generation of the codewords we do not need the feedback from the pirates. In other words, at each time one can calculate the next value of pi and the symbols ({right arrow over (x)}j)i for each user, without having to wait for the output {right arrow over (y)}i−1.
6. It is also possible to generate the whole code and the codewords in advance, and store these codewords at the client side. The scheme only needs to be able to disconnect a user from the system during the process. This is also the only reason why this scheme does not work in a static environment: there the pirates could sacrifice one user for all positions, while here, after some steps, we disconnect that user and force the others to contribute. So the fact that this scheme is in a sense only semi-dynamic could give practical advantages over a fully dynamic scheme.
7. One does not need to store the codewords or the values of p but only the current accusation scores for each user in this scheme. So the total storage theoretically required for this scheme is constant for each user.
8. Suppose in a static scheme you need a codelength of l=10000 to catch at least one pirate with ∈2=1/100 error, while with the same parameters and l=1000 the error probability ∈2 is bounded by ½. Then in the static scheme you always need length 10000, while in the dynamic scheme at least half of the runs will take at most 1000 positions until all pirates are disconnected. In the other half of the cases the time needed is still at most 10000, and the overall error probability of not catching all colluders is still the same. So not only do we catch all pirates, but on average the time needed may also be reduced drastically.
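The averaging in the last item can be made explicit. Under the numbers of that (hypothetical) example, an upper bound on the expected number of positions in the dynamic scheme is:

```python
# At least half of the runs finish within 1000 positions; the rest
# need at most the full 10000 positions.
expected_upper_bound = 0.5 * 1000 + 0.5 * 10000
assert expected_upper_bound == 5500.0  # versus always 10000 statically
```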
(Table: values of dl, rounded to the nearest integer, for several values of c and η=ln(∈2)/ln(∈1/n), for k=ln(109).)
(Table: values of dl, rounded to the nearest integer, for several values of c and η=ln(∈2)/ln(∈1/n), for k=ln(106).)
We discussed probabilistic static schemes, where we were given values of n, c, ∈1, ∈2, and where we were looking for schemes which are collusion-resistant against c colluders, with n users in total and maximum error rates ∈1 and ∈2 respectively. The most important thing to notice here is that c is given in advance; the maximum coalition size is given, and we construct a code specifically for coalitions of this size.
Deterministic dynamic schemes are schemes that catch any coalition of any size c in polynomial time in c. One advantage of these schemes is thus that c does not need to be known in advance; during the process we can figure out what c is, and by just adjusting the alphabet size and the time (length) we are able to catch any coalition. This also makes more sense in practice; pirates do not announce their collusion size, so c is usually unknown.
So one natural question one could ask is: Can we construct a static scheme that works even when c is unknown? For deterministic static schemes the answer is simply no. Any scheme that is resistant against c colluders has at least codelength Ω(c2), hence for c→∞ the codelength has to go to infinity to always be sure to catch a guilty user. However, for probabilistic static schemes, the answer is not simply no. One could try to construct a universal code that does not depend on c, and see if we can say something about the error probability of our scheme for given l and some unknown c. Of course as l is fixed and c goes to infinity, the error probability goes to 1, but with l sufficiently large compared to c one might still be able to bound the error probability for any c.
Note that this is really something non-trivial. One might argue that, say, the Tardos scheme that is resistant against 50 colluders is actually resistant against any collusion of size at most 20, so that we can catch smaller coalitions with that scheme as well. However, the proof then only works for the codelength that belongs to the value of c=20. In other words, we cannot use the Tardos code with l=100·202·ln(n/∈1) for 20 colluders, and take only 100·32·ln(n/∈1) of these symbols to catch a coalition of size 3. Then we also need all 100·202·ln(n/∈1) symbols to make the proof work.
As it turns out, we can indeed construct such a universal code based on the Tardos code, by moving the dependence on c from the code to the score function. In that way, we can always use the same code regardless of c, and catch a coalition of any size with this code as long as the codelength is long enough.
Let us briefly go back to the original Tardos code. Looking closely, we see that there is a chain of dependencies which makes the code dependent on c: we take Xji˜Ber(pi), we take pi˜F=Fδ, and we take δ=1/(300c). So the values of Xji (indirectly) depend on the value of c. Can't we simply take δ as some number that does not depend on c? Well, for the proof of soundness we use that αSji≦α√((1−δ)/δ)≦α√(1/δ)≦√3<1.74. In other words, the score functions g0(p)=−√(p/(1−p)) and g1(p)=+√((1−p)/p) need to be bounded from above by some values (depending on c) for the proof to work. This implies that δ has to be sufficiently large; if it were smaller, then the values of p could be too close to 0 or 1, and the score functions would go to ±∞.
On the other hand, we also need that δ is sufficiently small. In the proof of completeness we introduce integrals at some point to calculate an expected value, and because of the bounds δ and 1−δ a function rolls out that depends on δ. Later on we show that this term is sufficiently small, by using that δ is as small as 1/(300c).
So getting rid of the code-dependence on c is not that simple. The code indirectly depends on δ, and this δ has to be sufficiently small but also sufficiently large for the proof to work. So let us again look at the probability density function we need for a specific value of c.
The important observation now is that this function ƒc(p) is actually almost the same for different values of c; for different c we merely use a different scaling of this function to make sure the integral sums up to 1. We can find a simple relation between ƒc(p) and ƒĉ(p) as below.
In particular, writing ƒ(p)=limc→∞ƒc(p)=1/(π√(p(1−p))), which is a function that does not depend on c, we get the relation:
The function ƒc(p) and the normalization constant 1−4δ′c/π are chosen such that integrating ƒc from δc to 1−δc gives exactly 1, i.e. ƒc is a probability density function. Using the above relation, this means that integrating ƒ from δc to 1−δc gives only 1−4δ′c/π.
Since δ′c=arcsin(√δc)=arcsin(√(1/(300c))) is very small, integrating ƒ(p) from δc to 1−δc gives roughly 1. So the area below the curve of ƒ between the offsets δc and 1−δc is roughly one.
So what we could do, is simply use the function ƒ for our codeword generation, regardless of c. Then, during the accusation phase, we first pick a c for which we want to calculate scores. For this c, we then need that the values of pi were taken according to ƒc. Therefore for this value of c we simply disregard all fingerprint positions for which pi was not in the range [δc, 1−δc]. We just calculated the losses here, and saw that we are actually throwing away only a small percentage of the data. Then for the remaining positions, the values of pi are actually according to the distribution ƒc, since ƒc is just a rescaled version of ƒ between the offsets δc and 1−δc. The mechanics for the Tardos scheme thus remain the same, and the proofs go analogously.
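This idea is easy to sketch in code. Since F(p)=(2/π)arcsin(√p) is the CDF of ƒ, one can sample from ƒ via the inverse CDF p=sin2(πu/2); the sketch below (with the Tardos-style choice δc=1/(300c), an assumption for illustration) then estimates which fraction of the sampled positions survives the restriction to [δc, 1−δc]:

```python
import math
import random

def sample_f():
    """Sample p from f(p) = 1/(pi*sqrt(p(1-p))) via p = sin^2(pi*u/2)."""
    return math.sin(math.pi * random.random() / 2) ** 2

def usable(p, c, d_delta=300):
    """Is a position with this p usable for coalition size c?"""
    delta_c = 1.0 / (d_delta * c)
    return delta_c <= p <= 1 - delta_c

random.seed(1)
samples = [sample_f() for _ in range(100_000)]
frac = sum(usable(p, 2) for p in samples) / len(samples)
# Expected fraction for c = 2: 1 - 4*arcsin(sqrt(1/600))/pi, about 0.948
assert abs(frac - 0.948) < 0.01
```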
A detail we now want to check precisely is how much we are really throwing away for each c. For this we plot the values of 1−4δ′c/π for different values of c. Multiplying these values by 100 gives the percentage of the fingerprint positions that can be used for this c. As c→∞ this rate goes to 1, as then all fingerprint positions can be used, while the minimum is at c=2, when only 94.8% of the data can be used and 5.2% has to be thrown away. For c≧3 the loss is less than 5 percent; for c≧4 the loss is less than 4 percent; for c≧7 the loss is less than 3 percent; for c≧14 the loss is less than 2 percent; and for c≧55 the loss is less than 1 percent.
Similarly, one could check for given l how large l′ has to be such that, after removing the data outside the boundaries, we still have l positions left. For c≧3 we need at most 5 percent more data; for c≧4 we need at most 4 percent more; for c≧7 we need at most 3 percent more; for c≧15 we need at most 2 percent more; and for c≧56 we need at most 1 percent more fingerprinting positions.
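These percentages follow directly from the formula 1−4δ′c/π; a short Python check, again assuming δc=1/(300c):

```python
import math

def overhead(c, d_delta=300):
    """Extra fraction of positions needed so that, after discarding the
    positions outside [delta_c, 1-delta_c], l usable positions remain."""
    usable = 1 - 4 * math.asin(math.sqrt(1.0 / (d_delta * c))) / math.pi
    return 1 / usable - 1

assert overhead(3) < 0.05   # at most 5 percent more data for c >= 3
assert overhead(7) < 0.03   # at most 3 percent more for c >= 7
assert overhead(56) < 0.01  # at most 1 percent more for c >= 56
```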
One way to implement the above into a scheme is the following. We keep counters l2, . . . , lc, . . . for all values of c, counting how many values of pi(1≦i≦t) are between δc and 1−δc, i.e. how many of the fingerprinting positions so far are useful for a specific value of c. This basically means that lc is the effective codelength of the code so far for catching c colluders. As we saw above, lc is slightly smaller than the real codelength l up to this point, and li≦lj for i<j.
Then, once l2 is sufficiently large, e.g. l2=100·22·ln(n/∈1), we calculate the scores for each user for these useful values of pi. If we catch the colluders we are done, and otherwise we continue. We do the same for c=3, . . . , until we find the collusion. This way we will catch any collusion of size at most c in at most Ac2ln(n/∈1)(1+r) time, with error probability at most ∈2. The probability of accusing no innocent users is then at least (1−∈1)c≈1−c∈1, i.e. the false positive rate increases by a factor c. However, the codelength barely increases compared to the case of known c, especially when c is large.
Another way to implement the scheme is using the above outline, but taking ∈1/i2 as the allowed false positive error probability for each guess i of the number of colluders. Then most of the asymptotics remain the same, but the probability of not accusing any innocent users is then at least ∏ci=2(1−∈1/i2)≈1−(1/4+1/9+ . . . +1/c2)∈1≧1−(π2/6)∈1.
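The bound in the last step can be verified numerically; the sketch below evaluates the product for several values of c and checks it against 1−(π2/6)∈1:

```python
import math

def no_false_positive_lb(c, eps1):
    """Product over guesses i = 2..c of (1 - eps1/i^2)."""
    prod = 1.0
    for i in range(2, c + 1):
        prod *= 1 - eps1 / (i * i)
    return prod

eps1 = 0.01
for c in (2, 5, 50, 500):
    assert no_false_positive_lb(c, eps1) >= 1 - (math.pi ** 2 / 6) * eps1
```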
Let us now give the construction in full, with a full explanation afterwards of what is going on. Let n≧2 be a positive integer, and let ∈1, ∈2∈(0, 1) be the desired upper bounds for the soundness and completeness error probabilities respectively. Let {right arrow over (ν)}(i) be the characteristic vector such that νc=1 if pi∈[δc, 1−δc] and νc=0 otherwise. Then the universal Tardos traitor tracing scheme works as follows.
1. Initialization
2. Generation/Distribution/Accusation
For each time i≧1 do the following.
Let us explain a bit more what we are actually doing here. First, we take some vector {right arrow over (k)} whose entries must sum to at most one. The purpose of these constants is the following. For each c, we will bound the false positive probability by kc·∈1/n. However, since we run all these schemes simultaneously, the total false positive probability for a single user is bounded from above by summing the false positive probabilities over all c. So the probability that an innocent user is ever accused is bounded by Σ∞c=1kc∈1/n=(∈1/n)Σ∞c=1kc. Since we want this to be bounded from above by ∈1/n, we get the requirement that Σ∞c=1kc≦1. One way to realize this is to take e.g. kc=(½)c, so that kc decreases exponentially in c. However, then we get ln(n/kc∈1)=ln(2cn/∈1)=O(c·ln(n/∈1)), so that lc=O(c3ln(n/∈1)), which is not what we want. Fortunately we can also take e.g. kc=6/(π2c2) (using that Σ∞c=11/c2=π2/6), so that ln(n/kc∈1)=O(ln(n/∈1)) and lc=O(c2ln(n/∈1)). If c is expected to be small, then to make l1, l2, l3, . . . as small as possible, it is better to take kc=O(1/cN) for some large N, so that most of the weight is at the beginning. However, if c may be large, then kc=O(1/c1+μ) for some small μ>0 may be the better choice, so that the weights decrease more slowly.
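The choice kc=6/(π2c2) can be checked numerically: the weights sum to 1, and the resulting per-c parameter ln(n/kc∈1) grows only logarithmically in c. A short sketch, with illustrative values of n and ∈1:

```python
import math

c_max = 100_000
k = [6 / (math.pi ** 2 * c * c) for c in range(1, c_max + 1)]
assert sum(k) < 1              # every partial sum stays below 1
assert abs(sum(k) - 1) < 1e-4  # and the full sum converges to 1

# Per-c parameter ln(n/(k_c*eps1)) = ln(n/eps1) + ln(pi^2 c^2 / 6)
n, eps1 = 1000, 0.01
k_c = [math.log(n / (kc * eps1)) for kc in k[:5]]
assert all(b > a for a, b in zip(k_c, k_c[1:]))  # grows only slowly with c
```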
Now let us continue with the construction. The constants ln(n/kc∈1) play the role of k=ln(n/∈1) from the previous chapters, except that now there is an extra term inside the logarithm, making the value of k different for each c. The parameters {right arrow over (λ)}, {right arrow over (ζ)}, {right arrow over (d)}, {right arrow over (α)}, {right arrow over (ρ)}, {right arrow over (γ)}, {right arrow over (σ)} play the roles of dl, dz, dδ, dα, r, g, s respectively, except that now these are vectors. The renaming is done for convenience, to avoid double indices. Note that (US1), (US2), (UC1), (UC2) are the same as (S1), (S2*), (C1′), (C2*) from the previous chapter, only with variables renamed. Next we also take lc, Zc, δc different for each c, using these variables λc, . . . , σc. Finally we initialize all scores for all users at 0, and the counter of used positions for each c is set at 0. These counters tc will count how many of the pi up to now were between δc and 1−δc, i.e. how many positions were not discarded for this value of c.
Then comes the actual distribution/accusation phase. At each time step, we first generate a value pi according to F∞ (which does not depend on c). This pi is then used to generate symbols Xji, as is usual in the Tardos scheme. The symbols are distributed, and if a pirate transmitter is still active, we assume we will intercept some pirate output yi. If no output is received, then we are happy: either we can wait and repeat the same symbols until output is received (which only means the pirates have lost part of the content for their distribution), or we can at some point terminate, concluding that we must have caught all pirates. Of course this also depends on the scenario; e.g. if no user was disconnected before the pirate output stopped, then a pirate is still active and one may want to continue to wait. Then, for each user j we calculate the value Sji (which in fact also does not depend on c), but for updating the scores we now only increase those scores (Sj)c for which pi∈[δc, 1−δc]. This is done simply by adding Sji·{right arrow over (ν)}i, since {right arrow over (ν)}i has the nice property of indicating for which values of c the scores should be updated. Similarly, the counters are updated simply by adding {right arrow over (ν)}i to {right arrow over (t)} (adding 1 only for those c for which these symbols were used), and for each user j and coalition size c we check whether the cth score of user j has exceeded the cth threshold Zc. Note that in most cases (Sj)c≈(Sj)c+1, while Zc and Zc+1 may be quite far apart.
After this, the process repeats for the next value of i, generating new symbols, distributing them, updating scores and disconnecting users. The process terminates if, as mentioned above, no pirate output is received anymore, but again, depending on the application, one may want to have different rules for termination (e.g. after a fixed number of positions the process simply has to stop).
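One step of this bookkeeping can be sketched as follows; characteristic_vector and update are hypothetical helper names, and the per-user symbol generation and score computation are omitted:

```python
def characteristic_vector(p, c_max, delta):
    """v_c = 1 if p lies in [delta_c, 1-delta_c], else 0, for c = 2..c_max."""
    return [1 if delta[c] <= p <= 1 - delta[c] else 0
            for c in range(2, c_max + 1)]

def update(scores, counters, s_ji, v):
    """Add the position score s_ji only to the per-c scores for which the
    position is usable, and advance the matching counters t_c."""
    for idx, used in enumerate(v):
        if used:
            scores[idx] += s_ji
            counters[idx] += 1

c_max = 5
delta = {c: 1.0 / (300 * c) for c in range(2, c_max + 1)}
scores, counters = [0.0] * (c_max - 1), [0] * (c_max - 1)

update(scores, counters, 1.5, characteristic_vector(0.002, c_max, delta))
update(scores, counters, -0.5, characteristic_vector(0.0012, c_max, delta))
# 0.0012 < delta_2 = 1/600, so the second position is unusable for c = 2
assert counters == [1, 2, 2, 2]
```

This also illustrates the property noted earlier that the effective codelengths satisfy li≦lj for i<j.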
Let us now formally prove results about this scheme. Using the above construction, we get the following results, which can easily be proved using results from the previous chapters.
Theorem 10.1. Let the universal Tardos scheme be constructed as above. Then the probability of ever accusing an innocent user is at most ∈1/n, hence the probability of never disconnecting any innocent users is at least 1−∈1.
Proof. We chose the parameters λc, . . . , σc such that they satisfy the requirements from the dynamic Tardos scheme with parameter c and error probability ∈1,c=kc∈1. Hence for each c we know that the probability of having (Sj)c>Zc before the time i when tc=lc is at most kc∈1/n. So the probability that the user is ever accused is bounded from above by:
The proof can again be completed by noting that (1−∈1/n)n≧1−∈1.
Theorem 10.2. Let the universal Tardos scheme be constructed as above. Let C be a coalition of some a priori unknown size c. Then the probability that by the time i when tc=lc some members of the coalition are still active is bounded from above by ∈2.
Proof. We chose the parameters λc, . . . , σc such that they satisfy the requirements from the dynamic Tardos scheme with parameter c, so the result follows from the proofs given in the previous Chapter.
Theorem 10.3. Let the universal Tardos scheme be constructed as above. Let Tc be the time at which we see the lcth value of pi inside [δc, 1−δc]. Then Tc−lc is distributed according to a negative binomial distribution with parameters r=lc and p=P[pi∉[δc, 1−δc]]≈(4/π)√δc. Hence Tc has mean μ=lc/(1−p) and variance σ²=lc·p/(1−p)², and P[Tc≧μ+a] for a>0 decreases exponentially in a.
Proof. The fact that Tc−lc follows a negative binomial distribution can easily be verified from the definition of the negative binomial distribution: we are waiting for r=lc successes, with each trial succeeding with probability 1−p=P[pi∈[δc, 1−δc]]. Finally, the mean and variance of the negative binomial distribution, as well as the size of its tails, are well known.
To summarize the results, we see that the number of symbols Tc needed to reach lc useful symbols is a random variable with mean lc/(1−(4/π)√δc) and exponentially small tails, and once we reach this time Tc, we know that with probability 1−∈2 we will have caught all guilty users of the coalition, provided the coalition had size at most c. Furthermore, the probability of ever disconnecting an innocent user is at most ∈1.
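As a sanity check on Theorem 10.3, the following sketch (hypothetical code, not part of the scheme itself) simulates Tc under the arcsine distribution and compares the empirical mean with lc/(1−p):

```python
import math
import random

def sample_p():
    # p = sin^2(theta), theta uniform on (0, pi/2): the arcsine law.
    return math.sin(random.uniform(0.0, math.pi / 2)) ** 2

def time_to_l_useful(l, delta):
    """T_c: number of draws of p_i until l of them land in [delta, 1-delta]."""
    t, useful = 0, 0
    while useful < l:
        t += 1
        if delta <= sample_p() <= 1 - delta:
            useful += 1
    return t

random.seed(42)
l, delta = 200, 0.01
# Exact failure probability under the arcsine law:
# P(p outside [delta, 1-delta]) = (4/pi)*arcsin(sqrt(delta)) ~ (4/pi)*sqrt(delta)
p_fail = (4 / math.pi) * math.asin(math.sqrt(delta))  # ~ 0.1275 for delta=0.01
expected = l / (1 - p_fail)                           # mean of T_c
runs = [time_to_l_useful(l, delta) for _ in range(2000)]
avg = sum(runs) / len(runs)
print(round(p_fail, 4), round(expected, 1), round(avg, 1))
```

The empirical average should land close to the predicted mean, and every run takes at least lc draws, in line with Tc≧lc.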
Let us use the sequence kc=6/(π²c²), such that Σc≧1 kc=1. Then for fixed c we get lc=λc·c²·ln(n·c²π²/(6∈1)), so for constant λc we get lc=O(c²ln(n/∈1)). Let n=1000 and ∈1=∈2=0.01. Then we can take the parameters as in Tables 10.1 and 10.2, giving codelengths lc=O(c²ln(n/∈1)).
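The parameter choices above can be computed directly. In this sketch the constant λc=100 is a placeholder (the actual values come from Tables 10.1 and 10.2, which are not reproduced here); only the kc sequence and the shape of lc follow the text.

```python
import math

n, eps1 = 1000, 0.01

def k(c):
    # k_c = 6/(pi^2 c^2); these weights sum to 1 over c = 1, 2, ...
    return 6.0 / (math.pi ** 2 * c ** 2)

def codelength(c, lam_c=100.0):
    # l_c = lambda_c * c^2 * ln(n / eps1_c) with eps1_c = k_c * eps1.
    # lam_c = 100 is an illustrative placeholder, not a value from the text.
    eps1_c = k(c) * eps1
    return math.ceil(lam_c * c ** 2 * math.log(n / eps1_c))

total = sum(k(c) for c in range(1, 100000))   # partial sum, close to 1
for c in (2, 3, 5):
    print(c, codelength(c))
```

This makes the O(c²ln(n/∈1)) growth of the codelengths visible: doubling c roughly quadruples lc, with only a logarithmic correction from the shrinking ∈1,c.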
Again let us use the sequence kc=6/(π²c²). Let n=100 and c=3, and let ∈1=0.01 and ∈2=0.5.
So the scheme we saw above is able to catch coalitions of any a priori unknown size c in O(c²ln(n/∈1)) time, with arbitrarily high probability. This is already a huge improvement over earlier results from Tassa [Tas05]. However, this is not all, as this scheme has many more advantages.
First of all, as we also saw with the dynamic Tardos scheme, the code is independent of the pirate output. The only thing we use the pirate output for is to disconnect users in between rounds. This means that we could theoretically generate the whole vector of values pi and the whole code matrix X in advance, instead of in between rounds. We then never have to worry about the time between receiving pirate output and sending new symbols, as this can be done instantly. Also, this means that one could try to somehow store the part of the matrix belonging to user j (i.e. {right arrow over (x)}j) at the client side of user j, instead of distributing symbols one at a time. If this can somehow be made secure, so that users cannot tamper with their codewords, then this would save the distributor from having to send symbols to each user over and over. Instead, he could send the whole codeword at the start, and then start the process of distributing content and disconnecting users. This could be a real advantage of this scheme, as private messages to each user are generally costly.
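The pregeneration idea can be sketched as follows; a minimal illustration assuming the arcsine distribution for ƒ∞, with invented function names:

```python
import math
import random

def pregenerate(n_users, n_positions, seed=0):
    """Generate all bias values p_i and the full code matrix X up front.

    Neither depends on the pirate output, so this can be done before
    tracing starts; row j of X is the codeword that could be stored
    (securely) at the client side of user j.
    """
    rng = random.Random(seed)
    p = [math.sin(rng.uniform(0.0, math.pi / 2)) ** 2
         for _ in range(n_positions)]
    X = [[1 if rng.random() < p[i] else 0 for i in range(n_positions)]
         for _ in range(n_users)]
    return p, X
```

During tracing, only the decisions (which users to disconnect, when to stop) depend on the intercepted output; the randomness is fixed in advance.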
The pirate scores grow over time. Regardless of the strategy used, this growth will be close to linear in t, which intuitively shows that the scheme will eventually catch all pirates, as the thresholds only grow as O(√T). So even if the scheme fails to catch all c pirates before time lc, with even higher probability all c pirates will be caught before time lc+1.
Secondly, note that the whole construction is basically identical for every time i. The symbols are always generated using the same distribution function ƒ∞, and the score function never changes either. So in fact, if e.g. at some time i0 a second pirate broadcast is detected, one could start a second universal Tardos scheme running simultaneously with the first one. Both traitor tracing algorithms could use the same symbols for their score functions, and both coalitions can be traced simultaneously. The probability that an innocent user is accused in one of the two schemes is then bounded by 2∈1 rather than ∈1, but this can be solved by simply taking ∈′1=∈1/2. One could generalize this, and make statements like: any set of coalitions (with cardinality constant in c, n) of size at most c can be traced in O(c²ln(n/∈1)) time, taking ∈′1 as ∈1 divided by the cardinality of the set of coalitions. In any case, this shows that we can trace multiple coalitions simultaneously, even if the pirate broadcasts do not start at the same time.
Thirdly, note that for some fixed values of c and ∈1, we get some threshold value Zc and a length lc to use for this dynamic Tardos scheme. If, however, we used a different value ∈′1, we would have had a different value of Zc and a different codelength lc, but the process would be the same. This means that in our scheme, before time lc(∈′1), we could also check whether user scores exceed Zc(∈′1). In other words: besides running the dynamic Tardos scheme for each c for some fixed ∈1, we could also simultaneously run the dynamic Tardos scheme for each c for some other fixed ∈′1. Here we do get into trouble when really running these schemes simultaneously (since one has to decide whether or not to disconnect a suspect), but one could use these other thresholds Zc(∈′1) to calculate some sort of probability that a user is guilty. First the pirate would cross a 90% barrier (i.e. the probability that innocent users cross this line is <10%), then a 95% barrier, and when he crosses a 99% barrier he is disconnected. Then, already before the user is disconnected, we can give a statistic to indicate the 'suspiciousness' of this user. If a user then does not cross the final barrier, one could still decide whether to disconnect him later.
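The multi-barrier idea could be sketched as follows. This is purely hypothetical code: threshold() only mimics the O(√T) shape of Zc and does not use the scheme's actual constants, and all names are invented.

```python
import math

def threshold(c, eps, n=100, lam=10.0):
    # Placeholder for Z_c(eps): proportional to sqrt(l_c(eps)), where
    # l_c(eps) ~ lam * c^2 * ln(n/eps). Illustrative shape only.
    l = lam * c * c * math.log(n / eps)
    return 2.0 * math.sqrt(l)

def suspiciousness(score, c):
    """Report the strongest barrier a user's score has crossed.

    Smaller eps means a higher (stricter) barrier; the 99% barrier
    corresponds to the actual disconnection threshold.
    """
    levels = [(0.10, "90%"), (0.05, "95%"), (0.01, "99% -> disconnect")]
    crossed = [label for eps, label in levels if score > threshold(c, eps)]
    return crossed[-1] if crossed else "below all barriers"
```

A user's reported level can only move upward over time, since scores that keep growing eventually cross each successive barrier.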
Finally, another advantage of this scheme is another consequence of the fact that the scheme is identical for every i, namely that we can concatenate several instances of this process to form one larger process. For example, suppose one movie is broadcast, and during the tracing process for this movie no users, or only a few users, are caught. Then the pirates remain active, and when another movie is broadcast (possibly soon after, or only weeks after) they could start broadcasting again. By initializing the scores of users with the scores they had at the end of the first movie (and also loading the counters tc), one could start the tracing process with the pirates probably already having quite a high score. The pirates will then hit the threshold and be disconnected sooner than if we had to start over with all scores at 0.
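Carrying scores and counters over from one broadcast to the next amounts to simple state persistence between tracing sessions. A minimal sketch, with an invented on-disk format that is not prescribed by the text:

```python
import json

def save_state(scores, counters, path):
    """Persist tracing state at the end of one broadcast.

    scores   : {user_id: [score for each tracked c]}
    counters : {c (as string): t_c}
    """
    with open(path, "w") as f:
        json.dump({"scores": scores, "counters": counters}, f)

def load_state(path):
    """Restore scores and counters so the next session resumes from them."""
    with open(path) as f:
        state = json.load(f)
    return state["scores"], state["counters"]
```

The next tracing session then initializes from the loaded state instead of from all-zero scores, so pirates resume close to the thresholds they were already approaching.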
It will be appreciated that the architecture of the system 400 illustrated in the accompanying figures and described above is merely exemplary, and that embodiments may use systems having different architectures.
It will be appreciated that embodiments may be implemented using a variety of different information processing systems. In particular, although the figures and the discussion thereof provide an exemplary computing system and method, these are presented merely to provide a useful reference in discussing various aspects of the invention.
It will be appreciated that the system 400 may be any type of computer system, such as one or more of: a games console, a set-top box, a personal computer system, a mainframe, a minicomputer, a server, a workstation, a notepad, a personal digital assistant, and a mobile telephone.
It will be appreciated that, insofar as embodiments are implemented by a computer program, then a storage medium and a transmission medium carrying the computer program form aspects of the invention. The computer program may have one or more program instructions, or program code, which, when executed by a computer (or a processor) carries out an embodiment of the invention. The term “program,” as used herein, may be a sequence of instructions designed for execution on a computer system, and may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library, a dynamic linked library, and/or other sequences of instructions designed for execution on a computer system. The storage medium may be a magnetic disc (such as a hard drive or a floppy disc), an optical disc (such as a CD-ROM, a DVD-ROM or a BluRay disc), or a memory (such as a ROM, a RAM, EEPROM, EPROM, Flash memory or a portable/removable memory device), etc. The transmission medium may be a communications signal, a data broadcast, a communications link between two or more computers, etc.
Number | Date | Country | Kind |
---|---|---|---|
1110254.8 | Jun 2011 | GB | national |
This application claims priority to International Patent Application No. PCT/EP2012/058033, filed May 2, 2012, which claims priority to GB 1110254.8, filed Jun. 17, 2011, the disclosure of which is hereby incorporated by reference in its entirety.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2012/058033 | 5/2/2012 | WO | 00 | 3/13/2014 |