Image marking with error correction

Information

  • Patent Grant
  • 6317505
  • Patent Number
    6,317,505
  • Date Filed
    Wednesday, November 3, 1999
  • Date Issued
    Tuesday, November 13, 2001
Abstract
A method of marking a work of authorship without apparent evidence of data alteration. The work (e.g., an image) is represented by a set of data elements (e.g., pixels). The marking includes providing first plural-bit data, and computing therefrom additional, error correcting data. This composite set of data is used in image marking, permitting at least certain errors to be discerned and corrected when the marking is later discerned from the image.
Description




BACKGROUND OF THE INVENTION




Various images in traditional print or photographic media are commonly distributed to many users. Examples include the distribution of prints of paintings to the general public and photographs and film clips to and among the media. Owners may wish to audit usage of their images in print and electronic media, and so require a method to analyze print, film and digital images to determine if they were obtained directly from the owners or derived from their images. For example, the owner of an image may desire to limit access or use of the image. To monitor and enforce such a limitation, it would be beneficial to have a method of verifying that a subject image is copied or derived from the owner's image. The method of proof should be accurate and incapable of being circumvented. Further, the method should be able to detect unauthorized copies that have been resized, rotated, cropped, or otherwise altered slightly.




In the computer field, digital signatures have been applied to non-image digital data in order to identify the origin of the data. For various reasons these prior art digital signatures have not been applied to digital image data. One reason is that these prior art digital signatures are lost if the data to which they are applied are modified. Digital images are often modified each time they are printed, scanned, copied, or photographed due to unintentional “noise” created by the mechanical reproduction equipment used. Further, it is often desired to resize, rotate, crop or otherwise intentionally modify the image. Accordingly, the existing digital signatures are unacceptable for use with digital images.




SUMMARY OF THE INVENTION




The invention includes a method and system for embedding image signatures within visual images, applicable in the preferred embodiments described herein to digital representations as well as other media such as print or film. The signatures identify the source or ownership of images and distinguish between different copies of a single image. In preferred embodiments, these signatures persist through image transforms such as resizing and conversion to or from print or film and so provide a method to track subsequent use of digital images including derivative images in print or other form.




In a preferred embodiment described herein, a plurality of signature points are selected that are positioned within an original image having pixels with pixel values. The pixel values of the signature points are adjusted by an amount detectable by a digital scanner. The adjusted signature points form a digital signature that is stored for future identification of subject images derived from the image.




The preferred embodiment of the invention described herein embeds a signature within the original image by locating candidate points such as relative extrema in the pixel values. Signature points are selected from among the candidate points and a data bit is encoded at each signature point by adjusting the pixel value at and surrounding each point. Preferably, the signature is redundantly embedded in the image such that any of the redundant representations can be used to identify the signature. The signature is stored for later use in identifying a subject image.




According to a preferred embodiment, the identification of a subject image includes ensuring that the subject image is normalized, i.e., of the same size, rotation, and brightness level as the original image. If not already normalized, the subject image is normalized by aligning and adjusting the luminance values of subsets of the pixels in the subject image to match corresponding subsets in the original image. The normalized subject image is then subtracted from the original image and the result is compared with the stored digital signature. In an alternate embodiment, the normalized subject image is compared directly with the signed image.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 is a diagram of a computer system used in a preferred embodiment of the present invention.

FIG. 2 is a sample digital image upon which a preferred embodiment of the present invention is employed.

FIG. 3 is a representation of a digital image in the form of an array of pixels with pixel values.

FIG. 4 is a graphical representation of pixel values showing relative minima and maxima pixel values.

FIG. 5 is a digital subject image that is compared to the image of FIG. 2 according to a preferred embodiment of the present invention.











DETAILED DESCRIPTION OF THE INVENTION




The present invention includes a method and system for embedding a signature into an original image to create a signed image. A preferred embodiment includes selecting a large number of candidate points in the original image and selecting a number of signature points from among the candidate points. The signature points are altered slightly to form the signature. The signature points are stored for later use in auditing a subject image to determine whether the subject image is derived from the signed image.




The signatures are encoded in the visible domain of the image and so become part of the image and cannot be detected or removed without prior knowledge of the signature. A key point is that while the changes manifested by the signature are too slight to be visible to the human eye, they are easily and consistently recognizable by a common digital image scanner, after which the signature is extracted, interpreted and verified by a software algorithm.




In contrast to prior art signature methods used on non-image data, the signatures persist through significant image transformations that preserve the visible image but may completely change the digital data. The specific transforms allowed include resizing the image larger or smaller, rotating the image, uniformly adjusting color, brightness and/or contrast, and limited cropping. Significantly, the signatures persist through the process of printing the image to paper or film and rescanning it into digital form.




Shown in FIG. 1 is a computer system 10 that is used to carry out an embodiment of the present invention. The computer system 10 includes a computer 12 having the usual complement of memory and logic circuits, a display monitor 14, a keyboard 16, and a mouse 18 or other pointing device. The computer system also includes a digital scanner 20 that is used to create a digital image representative of an original image such as a photograph or painting. Typically, delicate images, such as paintings, are converted to print or film before being scanned into digital form. In one embodiment a printer 22 is connected to the computer 12 to print digital images output from the processor. In addition, digital images can be output in a data format to a storage medium 23 such as a floppy disk for displaying later at a remote site. Any digital display device may be used, such as a common computer printer, X-Y plotter, or a display screen.




An example of the output of the scanner 20 to the computer 12 is a digital image 24 shown in FIG. 2. More accurately, the scanner outputs data representative of the digital image and the computer causes the digital image 24 to be displayed on the display monitor 14. As used herein “digital image” refers to the digital data representative of the digital image, the digital image displayed on the monitor or other display screen, and the digital image printed by the printer 22 or a remote printer.




The digital image 24 is depicted using numerous pixels 26 having various pixel values. In the gray-scale image 24 the pixel values are luminance values representing a brightness level varying from black to white. In a color image the pixels have color values and luminance values, both of which are pixel values. The color values can include the values of any components in a representation of the color by a vector. FIG. 3 shows digital image 24A in the form of an array of pixels 26. Each pixel is associated with one or more pixel values, which in the example shown in FIG. 3 are luminance values from 0 to 15.




The digital image 24 shown in FIG. 2 includes thousands of pixels. The digital image 24A represented in FIG. 3 includes 225 pixels. The invention preferably is used for images having pixels numbering in the millions. Therefore, the description herein is necessarily a simplistic discussion of the utility of the invention.




According to a preferred embodiment of the invention numerous candidate points are located within the original image. Signature points are selected from among the candidate points and are altered to form a signature. The signature is a pattern of any number of signature points. In a preferred embodiment, the signature is a binary number between 16 and 32 bits in length. The signature points may be anywhere within an image, but are preferably chosen to be as inconspicuous as possible. Preferably, the number of signature points is much greater than the number of bits in a signature. This allows the signature to be redundantly encoded in the image. Using a 16 to 32 bit signature, 50-200 signature points are preferable to obtain multiple signatures for the image.




A preferred embodiment of the invention locates candidate points by finding relative maxima and minima, collectively referred to as extrema, in the image. The extrema represent local extremes of luminance or color. FIG. 4 shows what is meant by relative extrema. FIG. 4 is a graphical representation of the pixel values of a small portion of a digital image. The vertical axis of the graph shows pixel values while the horizontal axis shows pixel positions along a single line of the digital image. Small undulations in pixel values, indicated at 32, represent portions of the digital image where only small changes in luminance or color occur between pixels. A relative maximum 34 represents a pixel that has the highest pixel value for a given area of the image. Similarly, a relative minimum 36 represents a pixel that has the lowest pixel value for a given area of the image.




Relative extrema are preferred signature points for two major reasons. First, they are easily located by simple, well known processing. Second, they allow signature points to be encoded very inconspicuously.




One of the simplest methods to determine relative extrema is to use a “Difference of Averages” technique. This technique employs predetermined neighborhoods around each pixel 26: a small neighborhood 28 and a large neighborhood 30, as shown in FIGS. 2 and 3. In the present example the neighborhoods are square for simplicity, but a preferred embodiment employs circular neighborhoods. The technique determines the difference between the average pixel value in the small neighborhood and the average pixel value of the large neighborhood. If the difference is large compared to the difference for surrounding pixels, then the pixel is a relative maximum or minimum.
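
To make the computation concrete, here is a minimal sketch of the Difference of Averages described above, assuming a grayscale image held as a NumPy array; the function names, the default 3×3/5×5 windows, and the interior-pixel restriction are illustrative choices, not the patent's prescribed implementation.

```python
import numpy as np

def difference_of_averages(image, x, y, small=1, large=2):
    """Difference between the mean pixel value of a small neighborhood and the
    mean of a large neighborhood centered on (x, y). With small=1 and large=2
    the windows are 3x3 and 5x5, as in the worked example below."""
    def window_mean(half):
        x0, x1 = max(x - half, 0), min(x + half + 1, image.shape[0])
        y0, y1 = max(y - half, 0), min(y + half + 1, image.shape[1])
        return image[x0:x1, y0:y1].mean()
    return window_mean(small) - window_mean(large)


def all_differences(image, small=1, large=2):
    """Difference of Averages for every interior pixel, as an (x, y) -> value map."""
    rows, cols = image.shape
    return {(x, y): difference_of_averages(image, x, y, small, large)
            for x in range(large, rows - large)
            for y in range(large, cols - large)}
```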




Using the image of FIG. 3 as an example, the Difference of Averages for the pixel 26A is determined as follows. The pixel values within the 3×3 pixel small neighborhood 28A add up to 69; dividing by 9 pixels gives an average of 7.67. The pixel values within the 5×5 pixel large neighborhood 30A add up to 219; dividing by 25 pixels gives an average of 8.76 and a Difference of Averages of −1.09. Similarly, the average in small neighborhood 28G is 10.0; the average in large neighborhood 30G is 9.8; the Difference of Averages for pixel 26G is therefore 0.2. Similar computations on pixels 26B-26F produce the following table:
























                          26A      26B      26C      26D      26E      26F      26G
Small Neighborhood        7.67     10.56    12.89    14.11    13.11    11.56    10.0
Large Neighborhood        8.76     10.56    12.0     12.52    12.52    11.36    9.8
Difference of Averages   −1.09     0.0      0.89     1.59     0.59     0.2      0.2














Based on pixels 26A-26G, there may be a relative maximum at pixel 26D, whose Difference of Averages of 1.59 is greater than the Difference of Averages for the other examined pixels in the row. To determine whether pixel 26D is a relative maximum rather than merely a small undulation, its Difference of Averages must be compared with the Difference of Averages for the pixels surrounding it in a larger area.
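
A sketch of that surrounding-area check, assuming the Difference of Averages values have already been computed for each pixel (for instance with the earlier sketch); the window size and names are illustrative.

```python
def is_relative_maximum(diffs, x, y, window=5):
    """Given the (x, y) -> Difference of Averages map from the earlier sketch,
    treat (x, y) as a relative maximum only if its value exceeds that of every
    other pixel in the surrounding (2*window+1)-square area."""
    center = diffs[(x, y)]
    for dx in range(-window, window + 1):
        for dy in range(-window, window + 1):
            if (dx, dy) == (0, 0):
                continue
            neighbor = diffs.get((x + dx, y + dy))
            if neighbor is not None and neighbor >= center:
                return False
    return True
```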




Preferably, extrema within 10% of the image size of any side are not used as signature points. This protects against loss of signature points caused by the practice of cropping the border area of an image. It is also preferable that relative extrema that are randomly and widely spaced are used rather than those that appear in regular patterns.




Using the Difference of Averages technique or other known techniques, a large number of extrema are obtained, the number depending on the pixel density and contrast of the image. Of the total number of extrema found, a preferred embodiment chooses 50 to 200 signature points. This may be done manually by a user choosing with the keyboard 16, mouse 18, or other pointing device each signature point from among the extrema displayed on the display monitor 14. The extrema may be displayed as a digital image with each point chosen by using the mouse or other pointing device to point to a pixel, or they may be displayed as a list of coordinates which are chosen by keyboard, mouse, or other pointing device. Alternatively, the computer 12 can be programmed to choose signature points randomly or according to a preprogrammed pattern.




One bit of binary data is encoded in each signature point in the image by adjusting the pixel values at and surrounding the point. The image is modified by making a small, preferably 2%-10% positive or negative adjustment in the pixel value at the exact signature point, to represent a binary zero or one. The pixels surrounding each signature point, in approximately a 5×5 to 10×10 grid, are preferably adjusted proportionally to ensure a continuous transition to the new value at the signature point. A number of bits are encoded in the signature points to form a pattern which is the signature for the image.
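
A minimal sketch of encoding one bit at a signature point, assuming a float-valued grayscale NumPy array, a 5% adjustment (within the stated 2%-10% range), a 7×7 surrounding grid, and a simple linear falloff; all of these choices are illustrative.

```python
import numpy as np

def encode_bit(image, x, y, bit, strength=0.05, grid=7):
    """Embed one bit at signature point (x, y): nudge the pixel value up
    (bit=1) or down (bit=0) by `strength`, and taper the adjustment over a
    surrounding grid so the change blends smoothly into the image."""
    out = image.astype(float)
    delta = out[x, y] * strength * (1.0 if bit else -1.0)
    half = grid // 2
    for dx in range(-half, half + 1):
        for dy in range(-half, half + 1):
            px, py = x + dx, y + dy
            if 0 <= px < out.shape[0] and 0 <= py < out.shape[1]:
                # Full adjustment at the signature point itself, falling off
                # linearly to zero at the edge of the grid.
                falloff = 1.0 - max(abs(dx), abs(dy)) / (half + 1)
                out[px, py] += delta * falloff
    return out
```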




In a preferred embodiment, the signature is a pattern of all of the signature points. When auditing a subject image, if a statistically significant number of potential signature points in the subject image match corresponding signature points in the signed image, then the subject image is deemed to be derived from the signed image. A statistically significant number is somewhat less than 100%, but enough to be reasonably confident that the subject image was derived from the signed image.




In an alternate embodiment, the signature is encoded using a redundant pattern that distributes it among the signature points in a manner that can be reliably retrieved using only a subset of the points. One embodiment simply encodes a predetermined number of exact duplicates of the signature. Other redundant representation methods, such as an error-correcting code, may also be used.
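
A minimal sketch of the simplest redundant representation mentioned above, duplicating the signature bits across the available signature points and recovering them by majority vote; a true error-correcting code (e.g., a Hamming or Reed-Solomon code) could be substituted. The function names are illustrative.

```python
def spread_signature(bits, num_points):
    """Repeat the signature bits cyclically across the available signature points."""
    return [bits[i % len(bits)] for i in range(num_points)]


def recover_signature(readable_bits, signature_length):
    """Majority-vote each signature bit from whichever copies survived.
    `readable_bits` maps signature-point index -> decoded bit; points that
    could not be read are simply absent."""
    votes = [[0, 0] for _ in range(signature_length)]
    for index, bit in readable_bits.items():
        votes[index % signature_length][bit] += 1
    return [0 if zeros >= ones else 1 for zeros, ones in votes]
```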




In order to allow future auditing of images to determine whether they match the signed image, the signature is stored in a database in which it is associated with the original image. The signature can be stored by associating the bit value of each signature point together with x-y coordinates of the signature point. The signature may be stored separately or as part of the signed image. The signed image is then distributed in digital form.
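
A minimal sketch of such a store, assuming each record associates an image identifier with its list of (x, y, bit) triples in a JSON file; the schema and file format are illustrative, not part of the patent.

```python
import json

def store_signature(db_path, image_id, signature_points):
    """Record a signature for later auditing: image_id -> [(x, y, bit), ...]."""
    try:
        with open(db_path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = {}
    db[image_id] = [list(p) for p in signature_points]
    with open(db_path, "w") as f:
        json.dump(db, f, indent=2)


def load_signature(db_path, image_id):
    """Fetch the stored (x, y, bit) triples for an image, if any."""
    with open(db_path) as f:
        return [tuple(p) for p in json.load(f).get(image_id, [])]
```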




As discussed above, the signed image may be transformed and manipulated to form a derived image. The derived image is derived from the signed image by various transformations, such as resizing, rotating, adjusting color, brightness and/or contrast, cropping and converting to print or film. The derivation may take place in multiple steps or processes or may simply be the copying of the signed image directly.




It is assumed that derivations of these images that an owner wishes to track include only applications which substantially preserve the resolution and general quality of the image. While a size reduction by 90%, a significant color alteration or distinct-pixel-value reduction may destroy the signature, they also reduce the image's significance and value such that no auditing is desired.




In order to audit a subject image according to a preferred embodiment, a user identifies the original image of which the subject image is suspected of being a duplicate. For a print or film image, the subject image is scanned to create a digital image file. For a digital image, no scanning is necessary. The subject digital image is normalized using techniques as described below to the same size, and same overall brightness, contrast and color profile as the unmodified original image. The subject image is analyzed by the method described below to extract the signature, if present, and compare it to any signatures stored for that image.




The normalization process involves a sequence of steps to undo transformations previously made to the subject image, to return it as close as possible to the resolution and appearance of the original image. It is assumed that the subject image has been manipulated and transformed as described above. To align the subject image with the original image, a preferred embodiment chooses three or more points from the subject image which correspond to points in the original image. The three or more points of the subject image are aligned with the corresponding points in the original image. The points of the subject image not selected are rotated and resized as necessary to accommodate the alignment of the points selected.
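
A sketch of the three-point alignment, assuming an affine transform solved from the three correspondences and applied by nearest-neighbor resampling; the patent does not prescribe this particular math, so the functions below are illustrative.

```python
import numpy as np

def solve_affine(subject_pts, original_pts):
    """Solve the affine transform taking three (non-collinear) subject-image
    points onto the corresponding original-image points."""
    src = np.hstack([np.asarray(subject_pts, float), np.ones((3, 1))])  # [x, y, 1] rows
    dst = np.asarray(original_pts, float)                               # (3, 2)
    return np.linalg.solve(src, dst)                                    # (3, 2) coefficients


def align(subject, coeffs, out_shape):
    """Resample the subject onto the original image's grid by inverting the
    affine map (nearest-neighbor sampling, for brevity)."""
    forward = np.vstack([coeffs.T, [0.0, 0.0, 1.0]])   # 3x3 forward map
    inverse = np.linalg.inv(forward)
    out = np.zeros(out_shape, dtype=float)
    for x in range(out_shape[0]):
        for y in range(out_shape[1]):
            sx, sy, _ = inverse @ np.array([x, y, 1.0])
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < subject.shape[0] and 0 <= sy < subject.shape[1]:
                out[x, y] = subject[sx, sy]
    return out
```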




For example, FIG. 5 shows a digital subject image 38 that is smaller than the original image 24 shown in FIG. 2. To resize the subject image, a user points to three points such as the mouth 40B, ear 42B and eye 44B of the subject image using the mouse 18 or other pointer. Since it is usually difficult to accurately point to a single pixel, the computer selects the nearest extrema to the pixel pointed to by the user. The user points to the mouth 40A, ear 42A, and eye 44A of the original image. The computer 12 resizes and rotates the subject image as necessary to ensure that points 40B, 42B, and 44B are positioned with respect to each other in the same way that points 40A, 42A, and 44A are positioned with respect to each other in the original image. The remaining pixels are repositioned in proportion to the repositioning of points 40B, 42B and 44B. By aligning three points the entire subject image is aligned with the original image without having to align each pixel independently.




After the subject image is aligned, the next step is to normalize the brightness, contrast and/or color of the subject image. Normalizing involves adjusting pixel values of the subject image to match the value-distribution profile of the original image. This is accomplished by a technique analogous to that used to align the subject image. A subset of the pixels in the subject image is adjusted to equal corresponding pixels in the original image. The pixels not in the subset are adjusted in proportion to the adjustments made to the pixels in the subset. The pixels of the subject image corresponding to the signature points should not be among the pixels in the subset. Otherwise, any signature points in the subject image will be hidden from detection when they are adjusted to equal corresponding pixels in the original image.




In a preferred embodiment, the subset includes the brightest and darkest pixels of the subject image. These pixels are adjusted to have luminance values equal to the luminance values of corresponding pixels in the original image. To ensure that any signature points can be detected, no signature points should be selected during the signature embedding process described above that are among the brightest and darkest pixels of the original image. For example, one could use pixels among the brightest and darkest 3% for the adjusting subset, after selecting signature points among less than the brightest and darkest 5% to ensure that there is no overlap.
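
A minimal sketch of that luminance adjustment, assuming a linear remapping fitted so that the darkest and brightest few percent of subject pixels land on the corresponding levels of the original; the percentile choice mirrors the 3%/5% example above but is otherwise illustrative.

```python
import numpy as np

def normalize_luminance(subject, original, percentile=3):
    """Linearly remap the subject's luminance so its darkest/brightest
    `percentile` of pixels match the corresponding levels of the original."""
    s_lo, s_hi = np.percentile(subject, [percentile, 100 - percentile])
    o_lo, o_hi = np.percentile(original, [percentile, 100 - percentile])
    scale = (o_hi - o_lo) / (s_hi - s_lo)
    return (subject.astype(float) - s_lo) * scale + o_lo
```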




When the subject image is fully normalized, it is preferably compared to the original image. One way to compare images is to subtract one image from the other. The result of the subtraction is a digital image that includes any signature points that were present in the subject image. These signature points, if any, are compared to the stored signature points for the signed image. If the signature points do not match, then the subject image is not an image derived from the signed image, unless the subject image was changed substantially from the signed image.
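
A minimal sketch of the comparison, assuming the signature was stored as (x, y, bit) triples (as in the storage sketch above), that the residual is taken as subject minus original (equivalent up to sign to the subtraction described above), and that a positive residual at a point indicates an embedded one; the match threshold is illustrative.

```python
import numpy as np

def signature_matches(subject_norm, original, stored_points, min_match=0.85):
    """Compare the residual (normalized subject minus original) at each stored
    signature point against the stored bit; a positive residual is read as an
    embedded one. Returns True when a statistically significant fraction of
    the points agree."""
    residual = subject_norm.astype(float) - original.astype(float)
    agree = sum(1 for (x, y, bit) in stored_points
                if (residual[x, y] > 0) == bool(bit))
    return agree / len(stored_points) >= min_match
```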




In an alternative embodiment, the normalized subject image is compared directly with the signed image instead of subtracting the subject image from the original image. This comparison involves subtracting the subject image from the signed image. If there is little or no image resulting from the subtraction, then the subject image matches the signed image, and therefore has been derived from the signed image.




In another alternate embodiment, instead of normalizing the entire subject image, only a section of the subject image surrounding each potential signature point is normalized to be of the same general resolution and appearance as a corresponding section of the original image. This is accomplished by selecting each potential signature point of the subject image and selecting sections surrounding each potential signature point. The normalization of each selected section proceeds according to methods similar to those disclosed above for normalizing the entire subject image.




Normalizing each selected section individually allows each potential signature point of the subject image to be compared directly with a corresponding signature point of the signed image. Preferably, an average is computed for each potential signature point by averaging the pixel value of the potential signature point with the pixel values of a plurality of pixels surrounding the potential signature point. The average computed for each potential signature point is compared directly with a corresponding signature point of the signed image.




While the methods of normalizing and extracting a signature from a subject image as described above are directed to luminance values, similar methods may be used for color values. Instead of or in addition to normalizing by altering luminance values, the color values of the subject image can also be adjusted to equal corresponding color values in an original color image. However, it is not necessary to adjust color values in order to encode a signature in or extract a signature from a color image. Color images use pixels having pixel values that include luminance values and color values. A digital signature can be encoded in any pixel values regardless of whether the pixel values are luminance values, color values, or any other type of pixel values. Luminance values are preferred because alterations may be made more easily to luminance values without the alterations being visible to the human eye.




From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.



Claims
  • 1. A method of marking a work of authorship without apparent evidence of data alteration, the work being represented by a set of data elements, the method comprising: providing first plural-bit data; computing from the first plural-bit data additional, error correcting data corresponding thereto; and altering the values of at least certain of said data elements in accordance with composite data comprising the first data and the error correcting data, including increasing the value of a first data element, decreasing the value of a second data element, increasing the values of other data elements adjoining the first data element, and reducing the values of other data elements adjoining the second data element.
  • 2. A method of marking a work of authorship without apparent evidence of data alteration, the work being represented by a set of data elements, the method comprising providing first plural-bit data, computing from the first plural-bit data additional, error correcting data corresponding thereto; and altering the values of at least certain of said data elements in accordance with composite data comprising the first data and the error correcting data, including altering said certain data elements by an amount based, at least in part, on initial values thereof.
  • 3. The method of claim 2 in which said amount is ten percent or less.
  • 4. A method of marking a work of authorship without apparent evidence of data alteration, the work being represented by a set of data elements, the method comprising providing first plural-bit data, computing from the first plural-bit data additional, error correcting data corresponding thereto; and altering the values of at least certain of said data elements in accordance with composite data comprising the first data and the error correcting data; wherein at least certain excerpts of the work are not marked because marking in such excerpts would more likely be conspicuous.
TECHNICAL FIELD

This application is a continuation of application Ser. No. 09/317,784, filed May 24, 1999, now U.S. Pat. No. 6,072,888, which is a continuation of application Ser. No. 09/074,632, filed May 7, 1998, now U.S. Pat. No. 5,930,377, which is a continuation of application Ser. No. 08/969,072, filed Nov. 12, 1997, now U.S. Pat. No. 5,809,160, which is a continuation of application Ser. No. 07/923,841, filed Jul. 31, 1992, now U.S. Pat. No. 5,721,788.

US Referenced Citations (78)
Number Name Date Kind
2630525 Tomberlin et al. Mar 1953
3406344 Hopper Oct 1968
3562420 Thompson Feb 1971
3638188 Pincoffs et al. Jan 1972
3838444 Loughlin et al. Sep 1974
3845391 Crosby Oct 1974
3914877 Hines Oct 1975
3984624 Waggener Oct 1976
4225967 Miwa et al. Sep 1980
4230990 Lert, Jr. et al. Oct 1980
4231113 Blasbalg Oct 1980
4237484 Brown et al. Dec 1980
4238849 Gassmann Dec 1980
4310180 Mowry, Jr. et al. Jan 1982
4313197 Maxemchuk Jan 1982
4367488 Leventer et al. Jan 1983
4379947 Warner Apr 1983
4389671 Posner et al. Jun 1983
4425642 Moses et al. Jan 1984
4425661 Moses et al. Jan 1984
4488245 Dalke et al. Dec 1984
4495620 Steele et al. Jan 1985
4528588 Löfberg Jul 1985
4644582 Morishita et al. Feb 1987
4672605 Hustig et al. Jun 1987
4677466 Lert, Jr. et al. Jun 1987
4697209 Kiewit et al. Sep 1987
4703476 Howard Oct 1987
4750173 Blüthgen Jun 1988
4775901 Nakano Oct 1988
4825393 Nishiya Apr 1989
4855827 Best Aug 1989
4866771 Bain Sep 1989
4876617 Best et al. Oct 1989
4908836 Rushforth et al. Mar 1990
4920503 Cook Apr 1990
4939515 Adelson Jul 1990
4941150 Iwasaki Jul 1990
4943973 Werner Jul 1990
4943976 Ishigaki Jul 1990
4963998 Maufe Oct 1990
4969041 O'Grady et al. Nov 1990
4972471 Gross et al. Nov 1990
4979210 Nagata et al. Dec 1990
5063446 Gibson Nov 1991
5067162 Driscoll, Jr. et al. Nov 1991
5073899 Collier et al. Dec 1991
5075773 Pullen et al. Dec 1991
5079648 Maufe Jan 1992
5083224 Hoogendoorn et al. Jan 1992
5093867 Hori et al. Mar 1992
5103459 Gilhousen et al. Apr 1992
5113437 Best et al. May 1992
5134496 Schwab et al. Jul 1992
5146457 Veldhuis et al. Sep 1992
5161210 Druyvesteyn et al. Nov 1992
5200822 Bronfin et al. Apr 1993
5212551 Conanan May 1993
5228056 Schilling Jul 1993
5243423 DeJean et al. Sep 1993
5257119 Funada et al. Oct 1993
5278400 Appel Jan 1994
5315098 Tow May 1994
5319735 Preuss et al. Jun 1994
5327237 Gerdes et al. Jul 1994
5337361 Wang et al. Aug 1994
5374976 Spannenburg Dec 1994
5394274 Kahn Feb 1995
5410598 Shear Apr 1995
5436653 Ellis et al. Jul 1995
5453968 Veldhuis et al. Sep 1995
5510900 Shirochi et al. Apr 1996
5537216 Yamashita et al. Jul 1996
5541741 Suzuki Jul 1996
5719984 Yamagata et al. Feb 1998
5721788 Powell et al. Feb 1998
5790932 Komaki et al. Aug 1998
5907443 Hirata May 1999
Foreign Referenced Citations (8)
Number Date Country
29 43 436 May 1981 DE
058482A1 Aug 1982 EP
0372601A1 Jun 1990 EP
0 493 091 Jul 1992 EP
0493091A1 Jul 1992 EP
2063018A May 1981 GB
2196167A Apr 1988 GB
WO8908915A1 Sep 1989 WO
Non-Patent Literature Citations (30)
Entry
Gabor, et al., “Theory of Communication,” J. Inst. Elect. Eng. 93, 1946, pp. 429-441.
Roberts, “Picture Coding Using Pseudorandom Noise,” IRE Trans. on Information Theory, vol. 8, No. 2, Feb. 1962, pp. 145-154.
Jain, “Image Coding Via a Nearest Neighbors Image Model,” IEEE Transactions on Communications, vol. COM-23, No. 3, Mar. 1975, pp. 318-331.
Szepanski, “Optimization of Add-On Signals by Means of a Modified Training Algorithm for Linear Classifiers,” IEEE Int'l Symp. On Info. Theory, Oct. 10, 1997, pp. 27-28.
Szepanski, “Binary Data Transmission Over Video Channels with Very Low Amplitude Data Signals,” Fernseh-und Kino-Technik, vol.32, No.7, Jul., 1978, pp. 251-256. (German Text with full English translation.)
Szepanski, “A Signal Theoretic Method for Creating Forgery-Proof Documents for Automatic Verification,” Proceedings 1979 Carnahan Conference on Crime Countermeasures, May 16 1979, pp. 101-109.
Pickholtz et al., “Theory of Spread-Spectrum Communications—A Tutorial,” Transactions on Communications, vol. COM-30, No. 5, May, 1982, pp. 855-884.
Wagner, “Fingerprinting,” 1983 IEEE, pp. 18-22.
Sklar, “A Structured Overview of Digital Communications—a Tutorial Review—Part I,” IEEE Communications Magazine, Aug., 1983, pp. 1-17.
Sklar, “A Structured Overview of Digital Communications—a Tutorial Review—Part II,” IEEE Communications Magazine, Oct., 1983, pp. 6-21.
Sheng et al., “Experiments On Pattern Recognition Using Invariant Fourier-Mellin Descriptors,” Journal of Optical Society of America, vol. 3, No. 6, Jun., 1986, pp. 771-776.
Castro et al., “Registration of Translated and Rotated Images Using Finite Fourier Transforms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-9, No. 5, Sep. 1987, pp. 700-703.
Nakamura et al., “A Unified Coding Method of Dithered Image and Text Data Using Micropatterns,” Electronics and Communications in Japan, Part 1, vol. 72, No. 4, 1989, pp. 50-56.
Kassam, Signal Detection in Non-Gaussian Noise, Dowden & Culver, 1988, pp. 1-96.
Komatsu et al., “A Proposal on Digital Watermark in Document Image Communication and Its Application to Realizing a Signature,” Electronics and Communications in Japan, Part 1, vol. 73, No. 5, 1990, pp. 23-33.
Nakamura et al., “A Unified Coding Method of Image and Text Data Using Discrete Orthogonal Transform,” Systems and Computers in Japan, vol. 21, No. 3, 1990, pp. 87-92.
Tanaka, “Embedding the Attribute Information Into a Dithered Image,” Systems and Computers in Japan, vol. 21, No. 7, 1990, pp. 43-50.
Arazi, et al., “Intuition, Perception, and Secure Communication,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, No. 5, Sep./Oct. 1989, pp. 1016-1020.
Schreiber et al., “A Compatible High-Definition Television System Using the Noise-Margin Method of Hiding Enhancement Information,” SMPTE Journal, Dec. 1989, pp. 873-879.
Tanaka et al., “New Integrated Coding Schemes for Computer-Aided Facsimile,” Proc. IEEE Int'l Conf. on Sys. Integration, Apr. 1990, pp. 275-281.
Tanaka et al., “Embedding Secret Information Into a Dithered Multi-Level Image,” Proc. IEEE Military Comm. Conf., Sep. 1990, pp. 216-220.
Tanaka et al., “A Visual Retrieval System with Private Information for Image Database,” International Conference on DSP Applications and Technology, Oct. 1991, pp. 415-421.
Szepanski, “Compatibility Problems in Add-on Data Transmission for TV-Channels,” 2nd Symp. and Tech. Exh. on Electromagnetic Compatibility, Jun. 1977, pp. 263-268.
Szepanski, Additive binary Data Transmission for Video Signals, Conference of the Communications Engineering Society, 1980, NTG Technical Reports, vol. 74, pp. 343-351. (German text and English translation enclosed).
Szepanski, “A Signal Theoretic Method for Creating Forgery-Proof Documents for Automatic Verification,” Proc. 1979 Carnahan Conf. on Crime Countermeasures, Univ. of Kentucky, Lexington, KY, May 16-18, 1979, pp. 101-109.*
Nakamura et al., “A Unified Coding Method of Dithered Image and Text Data Using Micropatterns,” Electronics and Communications in Japan, Part 1, vol. 72, No. 4, 1989, pp. 50-56.*
Wagner, “Fingerprinting,” IEEE, 1983, pp. 18-22.*
Tanaka et al., “Embedding the Attribute Information into a Dithered Image,” Systems and Computers in Japan, vol. 21, No. 7, 1990, pp. 43-50.*
Komatsu et al., “Authentication System Using Concealed Image in Telematics,” Memoirs of the School of Science and Engineering, Waseda Univ., No. 52, 1988, pp. 45-60.*
Komatsu et al., “A Proposal on Digital Watermarking in Document Image Communication and Its Application to Realizing a Signature,” Electronics and Communications in Japan, Part 1, vol. 73, No. 5, 1990, pp. 23-33.
Continuations (4)
Number Date Country
Parent 09/317784 May 1999 US
Child 09/432532 US
Parent 09/074632 May 1998 US
Child 09/317784 US
Parent 08/969072 Nov 1997 US
Child 09/074632 US
Parent 07/923841 Jul 1992 US
Child 08/969072 US