Digital watermarking in data representing color channels

Abstract
The present disclosure relates to digital watermarking. One claim recites a method to detect two or more different digital watermarks in media. The method includes: receiving captured imagery of the media, the captured imagery comprising a plurality of image frames; for a first image frame applying a first watermark detector to search for a first digital watermark hidden within the first image frame, in which an electronic processor is programmed as the first watermark detector; and for a second image frame applying a second, different watermark detector to search for a second, different watermark hidden within the second image frame, in which an electronic processor is programmed as the second watermark detector. Other claims and combinations are provided too.
Description
FIELD

The present disclosure relates to hiding data in color channels.


BACKGROUND AND SUMMARY

The above-mentioned parent applications disclose various techniques for embedding and detecting hidden digital watermarks.


Digital watermarking technology, a form of steganography, encompasses a great variety of techniques by which plural bits of digital data are hidden in some other object, preferably without leaving human-apparent evidence of alteration.


Digital watermarking may be used to modify media content to embed a machine-readable code into the media content. The media may be modified such that the embedded code is imperceptible or nearly imperceptible to the user, yet may be detected through an automated detection process.


Digital watermarking systems typically have two primary components: an embedding component that embeds the watermark in the media content, and a reading component that detects and reads the embedded watermark. The embedding component embeds a watermark pattern by altering data samples of the media content. The reading component analyzes content to detect whether a watermark pattern is present. In applications where the watermark encodes information, the reading component extracts this information from the detected watermark. Assignee's U.S. patent application Ser. No. 09/503,881, filed Feb. 14, 2000 (now U.S. Pat. No. 6,614,914), discloses various encoding and decoding techniques. U.S. Pat. Nos. 5,862,260 and 6,122,403 disclose still others. Each of these U.S. patent documents is herein incorporated by reference.


Now consider our out-of-phase digital watermarking techniques with reference to FIGS. 1a and 1b. In FIG. 1a, the dash/dot C, M, Y and K lines represent, respectively, cyan, magenta, yellow and black color channels for a line (or other area) of a media signal (e.g., a picture, image, document, etc.). The FIG. 1a lines represent a base level or a particular color (or gray-scale) level (or intensity). Of course, it is expected that the color (or gray-scale) level will vary over the media signal. FIG. 1b illustrates the media of FIG. 1a after it has been embedded with an out-of-phase digital watermark signal. The watermark signal is preferably applied to each of the color component dimensions C, M and Y.


In FIGS. 1a and 1b, the M and Y channels are represented by one signal, since these color components can be approximately equal even though they remain separate signals. Of course, it is not necessary for these components to be equal, and in many cases the yellow and magenta components are not equal. The illustrated "bumps" (or "tweaks") in FIG. 1b represent the digital watermark signal, e.g., upward and downward signal adjustments in relation to a respective color channel at given points over the media signal. The tweaks are preferably applied at the same level (or signal strength). Alternatively, the bumps are applied with different signal strengths (or tweak levels) when compared to one another. Of course, these tweaks can be embedded over a color channel in a predetermined pattern, a pseudo-random fashion, a random fashion, etc., to facilitate embedding of a digital watermark signal.


For the K dimension (or channel), the digital watermark signal is preferably embedded to be out-of-phase with respect to the CMY channels. Most preferably, the K channel is approximately 180 degrees out-of-phase (e.g., inverted) with the watermark signals in the CMY color channels, as shown in FIG. 1b. For example, if a digital watermark signal modifies each of the color channels at a media's first location with a tweak level of, say, 7, then a tweak level of −7 correspondingly modifies the K channel at the media's first location. This digital watermark technique is referred to as our out-of-phase (or "K-phase") digital watermark. (We note that if a watermark signal is determined in terms of luminance, we can assign or weight corresponding tweak levels to the respective color plane pixel values to achieve the luminance value tweak. Indeed, a tweak can be spread over the CMY channels to achieve a collective luminance at a given media location. The luminance attributable to the CMY tweak is preferably cancelled or offset by the luminance effect attributable to a corresponding inverted K channel tweak at the given media location. Similarly, if a watermark signal is determined in terms of chrominance, we can assign or weight corresponding tweak levels to the respective color plane pixel values to achieve the chrominance value tweak. Indeed, a tweak can be spread over the CMY channels to achieve a steady luminance at a given media location. The luminance attributable to the CMY chrominance tweaks is preferably cancelled or offset by the luminance effect attributable to a corresponding inverted K channel tweak at the given media location. Or, more generally, the luminance in a given localized area is preferably steady or minimally changed, since a chrominance tweak in a first color channel reduces the luminance attributable to a chrominance tweak in a second, different color channel.)


Our inventive watermarking scheme greatly reduces watermark perceptibility. Since the watermark signal for the K channel is applied approximately 180 degrees out-of-phase, when compared to the respective tweaks applied to the C, M and/or Y channels, the watermark visibility is greatly reduced. The visibility reduction is achieved by the effective cancellation of perceived luminance changes when the CMYK image is viewed or printed. Indeed, combining an inverted watermark signal “tweak” or “bump” in a K channel with a corresponding non-inverted watermark signal tweak or bump in the CMY channels effectively cancels an overall perceived luminance change for a given area (e.g., a pixel or block of pixels)—greatly reducing visibility of the digital watermark.
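An illustrative sketch of this K-phase embedding follows, assuming 8-bit CMYK planes held as NumPy arrays and a precomputed tweak pattern of the same shape (e.g., values of +7/−7 at watermark locations); the function name and array layout are illustrative only, not part of the disclosure.

    import numpy as np

    def embed_k_phase(c, m, y, k, tweak):
        # Apply the tweak to C, M and Y; apply the inverted tweak to K
        # (approximately 180 degrees out-of-phase), so perceived luminance
        # changes locally cancel.
        c_out = np.clip(c.astype(int) + tweak, 0, 255).astype(np.uint8)
        m_out = np.clip(m.astype(int) + tweak, 0, 255).astype(np.uint8)
        y_out = np.clip(y.astype(int) + tweak, 0, 255).astype(np.uint8)
        k_out = np.clip(k.astype(int) - tweak, 0, 255).astype(np.uint8)
        return c_out, m_out, y_out, k_out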


The present disclosure discloses a new data hiding technique based on our out-of-phase technology. According to one implementation of the present disclosure, an image is hidden in or carried by a media signal. The hiding is accomplished with our out-of-phase embedding techniques. The image can be a photograph, a graphic, a barcode (1-D or 2-D), etc., etc. Another aspect of the disclosure is used to improve the visibility characteristics of our out-of-phase embedding techniques.


The foregoing and other aspects, features and advantages of the present disclosure will be even more apparent from the following detailed description, which proceeds with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1a is a diagram illustrating CMYK channels; and FIG. 1b illustrates the color CMYK channels of FIG. 1a, embedded with information.



FIG. 2 illustrates hiding an image in media.



FIG. 3 is a flow diagram illustrating an embedding method according to one implementation of the present disclosure.



FIGS. 4 and 5 are graphs showing hidden signal strength in terms of luminance.



FIGS. 6 and 7 are graphs showing hidden signal strength in terms of color saturation.



FIG. 8 illustrates limiting a signal tweak in low CMY areas to reduce hidden signal visibility.



FIG. 9 illustrates the segmentation of media into blocks.



FIG. 10 illustrates a feedback loop in an embedding process.



FIG. 11 illustrates feedback for the FIG. 10 feedback loop.



FIGS. 12a and 12b illustrate detection apparatus.



FIG. 13 illustrates orientation fiducials hidden in a media signal with our out-of-phase embedding techniques.



FIG. 14 illustrates out-of-phase embedding of a spot color.



FIG. 15 illustrates a printer calibration process.



FIG. 16 illustrates embedding in a blue channel with offsetting embedding occurring in red and green channels.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Image Embedding


With reference to FIG. 2, an image 10 is steganographically hidden within media 12. Of course, media 12 may represent digital media such as an image, photograph, video frame, graphic, picture, logo, product tag, product documentation, visa, business card, art work, brochure, document, product packaging, trading card, banknote, deed, poster, ID card (including a driver's license, member card, identification card, security badge, passport, etc.), postage stamp, etc., etc. And image 10 can correspond to a digital representation of a photograph, picture, graphic, text, orientation fiducial, object, barcode, message, digital watermark, outline, symbol, etc., etc. In the FIG. 2 example, image 10 includes a close-up photograph, and the media includes a driver's license or passport photograph. The hiding (or embedding) is accomplished using our inventive out-of-phase techniques.


With reference to FIG. 3, our K-phase hiding is preferably initiated by converting image 10 to a black channel image 10′ (step 30, FIG. 3). Most digital imaging software tools, such as Adobe's Photoshop, facilitate such a black channel conversion. The black channel image 10′ includes a set of black pixel values (e.g., gray-scale values). A location in the media 12 is selected to place the black channel image (step 32). The dashed circle 13 in FIG. 2 represents this selected location. The media 12 location can be represented by sets of media 12 pixels. (For example, a first set of pixels corresponds to the selected location's black channel values, a second set corresponds to the selected location's cyan channel values, a third set corresponds to the selected location's magenta channel values, and a fourth set corresponds to the selected location's yellow channel values.) The set of black channel image 10′ values is applied to the black channel pixels in the selected location of media 12, effectively modifying media 12 (step 34). For example, if an image 10′ pixel includes a gray-scale value of 3, this gray-scale value is applied to a corresponding pixel in the selected media 12 location to raise that corresponding pixel value by 3. In an alternative implementation, instead of adjusting the corresponding pixel in the selected media 12 location by the gray-scale value, we replace that corresponding pixel value with the black image 10′ gray-scale value. In another implementation, the corresponding media 12 pixel is modified to achieve the gray-scale value of the image 10′ pixel. Of course, we can scale and/or weight the gray-scale value as needed prior to modifying pixels in the selected location of media 12.
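An illustrative sketch of steps 30-34 follows, assuming the media's K plane and the gray-scale (black channel) image are 8-bit NumPy arrays and that (x0, y0) marks the selected location 13; the names and the optional scale parameter are illustrative only.

    import numpy as np

    def hide_in_k_channel(media_k, image_gray, x0, y0, scale=1.0):
        # Add the (optionally scaled) gray-scale values of the hidden image to
        # the K channel pixels at the selected location (step 34).
        h, w = image_gray.shape
        region = media_k[y0:y0 + h, x0:x0 + w].astype(int)
        tweak = np.round(scale * image_gray).astype(int)
        media_k[y0:y0 + h, x0:x0 + w] = np.clip(region + tweak, 0, 255).astype(np.uint8)
        return tweak  # retained so an inverted tweak can later be applied to CMY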


The black channel image 10′ is inverted to produce a set of signal tweaks (step 36). For example, if a black channel pixel is tweaked by a grayscale value of, say, 24, then a corresponding, inverted CMY tweak value is −24. (As an alternative implementation, image 10 is converted into corresponding C, M and Y images, and such images are applied to their respective channels.) These signal tweaks are then used to modify or change the color values in their respective CMY color channels (step 38). Most preferably, in the above example, the −24 tweak value is applied to each of the CMY color channels. The overall luminance cancellation can be effected as such. In another implementation we unevenly spread the tweak value over the CMY channels to achieve an overall luminance change in a given media location to cancel the +24 tweak in the black channel. For example, if using a luminance equation of L=0.3*C+0.6*M+0.1*Y, we can achieve an overall luminance tweak of −24 by tweaking C=−15, M=−30 and Y=−15. Of course, there is a vast range of other color combinations to achieve the same collective luminance change. Care should be taken, however, to minimize a color shift when using this tweak-spreading alternative. The CMY pixels and the K pixels are thus out-of-phase with respect to one another, resulting in a local cancellation of the perceived luminance change. Accordingly, image 10 is successfully hidden or carried by media 12.
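As a rough illustration of steps 36-38 under the simplified luminance model L=0.3*C+0.6*M+0.1*Y, the sketch below spreads an inverted K tweak over the CMY channels; the default weights reproduce the example above (a +24 K tweak offset by C=−15, M=−30, Y=−15), and all names are illustrative only.

    import numpy as np

    def offset_cmy(c, m, y, k_tweak, spread=(0.625, 1.25, 0.625)):
        # spread is chosen so 0.3*wc + 0.6*wm + 0.1*wy = 1, making the CMY
        # luminance change equal and opposite to the K channel tweak.
        wc, wm, wy = spread
        c_out = np.clip(c.astype(int) - np.round(wc * k_tweak), 0, 255).astype(np.uint8)
        m_out = np.clip(m.astype(int) - np.round(wm * k_tweak), 0, 255).astype(np.uint8)
        y_out = np.clip(y.astype(int) - np.round(wy * k_tweak), 0, 255).astype(np.uint8)
        return c_out, m_out, y_out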


The selected location 13 can be determined manually, e.g., via editing software tools (like Adobe's Photoshop). Or the selection process can be automated.


Image Hiding Enhancements


We have developed improvements to enhance our out-of-phase hiding techniques. These improvements apply to hiding both images and digital watermark signals (in this section both will be referred to as a hidden signal). While these techniques are not necessary to carry out our out-of-phase hiding techniques, they generally reduce the visibility of a hidden signal. Consider our following inventive improvements.


High Luminance Areas


Media 12 may include areas of low CMY and/or K ink (or signal intensity). In a first case, an area includes little or no C, M and/or Y ink. This results in an inability to counteract (or cancel) an inverted signal in a corresponding channel(s). Accordingly, we can sample the luminance of a media 12 area (or pixel) and, based on the luminance level, determine whether to scale back the hidden signal strength. For example, we begin to scale back the signal strength once the luminance reaches a predetermined threshold (e.g., in a range of 70-95% luminance). We can scale back the signal strength for a given area according to a linear reduction, as shown in FIG. 4, or we can scale the signal strength in a non-linear manner, e.g., as shown in FIG. 5. The illustrated scaling signal strength applies to both the K channel and CMY channels. In a related implementation, we determine the luminance of the yellow channel. We base our scaling decisions on the yellow luminance percentage.
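One possible form of such luminance-gated scaling is sketched below, assuming luminance is expressed as a 0-100% value and scaling begins at a chosen threshold (e.g., 85%); an analogous function can gate on color saturation as discussed in the next section. The threshold and the squared fall-off are illustrative assumptions only.

    def scale_factor(luminance_pct, threshold=85.0, linear=True):
        # Full strength below the threshold; ramp down toward zero above it,
        # either linearly (FIG. 4) or non-linearly (FIG. 5).
        if luminance_pct <= threshold:
            return 1.0
        frac = (luminance_pct - threshold) / (100.0 - threshold)
        return max(0.0, 1.0 - frac) if linear else max(0.0, (1.0 - frac) ** 2)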


Saturated Color


Hiding signals in a saturated color area can also result in increased hidden signal visibility concerns. For this document the term “saturation” refers to how pure a color is, or refers to a measure of color intensity. For example, saturation can represent the degree of color intensity associated with a color's perceptual difference from a white, black or gray of equal lightness. We determine the color saturation level in a color plane (e.g., the yellow color plane), and then scale back a hidden signal strength as the color saturation level exceeds a predetermined level (e.g., 80% yellow color saturation). As with the FIGS. 4 and 5 implementations, we can scale the signal strength in a linear manner (FIG. 6) or in a non-linear manner (FIG. 7).


Low or High Luminance Areas


We have found that we can even further improve the visibility characteristics of our hidden signals by considering the amount of luminance at a given pixel or other media 12 area. A low luminance may indicate that there is insufficient CMY to compensate for a K channel tweak. For example, a 10% luminance in CMY for a given pixel implies that the pixel can accommodate only about a 10% signal tweak (e.g., remember the simplified luminance relationship mentioned above: L=0.3*C+0.6*M+0.1*Y). With reference to FIG. 8, we can cap (or limit) the positive K tweak signal level in such low CMY areas to ensure that the CMY levels can be sufficiently decreased to counteract or cancel the positive K channel signal.


Similarly, in an area of high CMY luminance, a negative K channel tweak can be capped (or limited) to ensure a sufficient range to increase the CMY values.
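A simple sketch of this capping, using the simplified model L=0.3*C+0.6*M+0.1*Y with percentage values, is given below; the helper name and the use of the CMY luminance contribution as the cap are illustrative assumptions.

    def cap_k_tweak(k_tweak, c_pct, m_pct, y_pct):
        # Limit a K tweak to the range over which the CMY channels can still be
        # adjusted to offset it.
        cmy_luminance = 0.3 * c_pct + 0.6 * m_pct + 0.1 * y_pct
        if k_tweak > 0:
            # Positive K tweak: CMY must decrease, so cap by what CMY can give up (FIG. 8).
            return min(k_tweak, cmy_luminance)
        # Negative K tweak: CMY must increase, so cap by the remaining headroom.
        return max(k_tweak, -(100.0 - cmy_luminance))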


Equalizing Detectability


Now consider an implementation where media 12 is segmented into a plurality of blocks (FIG. 9). Here a block size can range from a pixel to a group of pixels. We redundantly embed an image or watermark signal in each of (or a subset of) the blocks. As shown in FIG. 10, we preferably use signal feedback (k) to regulate the embedding process. A signal feedback (k) method is shown in FIG. 11. A black (K) channel image or watermark signal (in this section hereafter both referred to as a "watermark") is embedded in block i of media 12 (step 110), where "i" is an integer ranging from 1 to n and where n is the total number of blocks. The watermark signal is inverted (step 112) and embedded in the CMY channels of block i (step 114). At this point, we preferably perform a detection process on the signal embedded within the i-th block (step 116). The detection process determines whether the signal is sufficiently detectable (step 118). The term "sufficient" in this context can include a plurality of levels. In one, "sufficient" implies that the signal is detectable. In another, the detectability of the signal is ranked (e.g., according to error correction needed, ease of detectability, or a detection-reliability metric, etc.). The term "sufficient" in a ranking context also implies that the detection ranking is above a predetermined threshold. The process moves to embed a new block i+1 if the embedding is sufficient (step 120). Otherwise the signal strength is increased or otherwise altered (step 122) and the embedding of block i is repeated.
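The feedback loop of FIGS. 10 and 11 might be sketched as follows, assuming injected per-block embed and detect callables (the embed callable applying the K tweaks and the inverted CMY tweaks, and the detect callable returning a detectability score); the threshold, step size and retry limit are illustrative assumptions only.

    def embed_with_feedback(blocks, signal, embed_fn, detect_fn,
                            strength=1.0, threshold=0.5, step=0.25, max_tries=8):
        for block in blocks:                       # block i, i = 1..n
            s = strength
            for _ in range(max_tries):
                embed_fn(block, signal, s)         # steps 110-114
                if detect_fn(block) >= threshold:  # steps 116-118: sufficiently detectable?
                    break                          # step 120: move to the next block
                s += step                          # step 122: increase strength and retry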


Such a signal feedback process helps to ensure consistent embedding throughout media 12.


Infrared Image Detection


An infrared detection method is illustrated with reference to FIG. 12a. In particular, the illustrated detection method employs infrared illumination to facilitate image (or watermark) detection. Media 12 is illuminated with an infrared illumination source 14. The media 12 is embedded as discussed above, for example, to include various components in a multicolor dimension space (e.g., CMYK). A first component (or image) is preferably embedded in the CMY channels. A second component (or image) is embedded in the K channel. The second component is preferably inverted (or is out-of-phase) with respect to the CMY channels.


Infrared illumination source 14 preferably includes a light emitting diode, e.g., emitting approximately in a range of 800 nm-1400 nm, or a plurality of light emitting diodes ("LED"). Of course, there are many commercially available infrared diodes, and such may be suitably used with our present detection techniques. It will be appreciated that many commercially available incandescent light sources emit light both in the visible and infrared ("IR") spectrums. Such incandescent light sources may alternatively be used as infrared illumination source 14. Indeed, infrared watermark detection may be possible in otherwise normal ("daylight") lighting conditions, particularly when using an IR-pass filter.


A conventional power source powers the infrared illumination source. (We note that a variable trim resistor and a small wall transformer can optionally be employed to control illumination source 14.) Power can alternatively be supplied from a battery pack, a voltage or current source, or by directly tapping a power source of a camera, e.g., internally drawn from parallel, USB, or corded power lines. For a consumer device, a battery pack or a single power cord that is stepped down inside a digital watermark reader housing can also be used.


Returning to the composition of an out-of-phase hidden image (or watermark), a first image (or watermark) component is embedded in a K (or black) channel. A second image component, e.g., which is out-of-phase with respect to the K channel, is embedded in the CMY channels. These characteristics have significance for infrared detection. In particular, C, M and Y inks will typically have high transmission characteristics in the infrared spectrum when printed, which render them nearly imperceptible under infrared illumination. Yet conventional black inks absorb a relatively high amount of infrared light, rendering the black channel perceptible with infrared illumination. We note that standard processing inks, such as those conforming to the standard web offset press (SWOP) inks, include black ink with IR detection properties. Of course, there are many other inks that may be suitably interchanged in the present disclosure.


As discussed above our out-of-phase embedding provides an effective cancellation of perceived luminance changes when the CMYK image is viewed in the visible spectrum. Indeed, combining an inverted watermark signal “tweak” or “bump” in a K channel with a corresponding non-inverted watermark signal tweak or bump in the CMY channels effectively cancels an overall perceived luminance change. However, under infrared illumination, the hidden image (or watermark) component in the black (K) channel becomes perceptible without interference from the C, M and Y channels. An infrared image primarily portrays (e.g., emphasizes) the black channel, while the C, M and Y channels are effectively imperceptible under infrared illumination.


In one implementation, camera 16 captures an image of media 12. Preferably, camera 16 includes an IR-Pass filter that passes IR while filtering visible light. For example, the Hoya RM90 filter available from M&K Optics L.L.C. is one of many IR-Pass/Visible Opaque filters suitable for daylight detection. Another suitable filter is the RG850 filter, part number NT54-664, available from Edmund Scientific. These filters are offered as examples only, and certainly do not define the entire range of suitable IR-pass filters. Of course there are many other IR-Pass filters that are suitably interchangeable with the present disclosure.


In yet another implementation, a conventional digital camera (or web cam) is modified so as to capture infrared light. In particular, most digital cameras and web cams include an IR filter, which filters out IR light. Removing the IR filter allows the camera to capture light in the IR spectrum. Consider a visibly dark environment (e.g., an enclosed case, shielded area, dark room, etc.). Media 12 is illuminated by infrared illumination source 14 in the visibly dark environment. Camera 16 (without an IR filter) effectively captures an infrared image (i.e., the K channel image) corresponding to the illuminated media 12.


The captured image is communicated to computer 18. Preferably, computer 18 includes executable software instructions stored in memory for execution by a CPU or other processing unit. If media 12 includes a digital watermark, the software instructions preferably include instructions to detect and decode the embedded digital watermark. Otherwise, the instructions preferably include instructions to display the K-phase image. The software instructions can be stored in memory or electronic memory circuits. Of course, computer 18 can be a handheld computer, a laptop, a general-purpose computer, a workstation, etc. Alternatively, computer 18 includes a hard-wired implementation, which precludes the need for software instructions.


With reference to FIG. 12b, a detection housing 20 can be provided to house an infrared illumination source 14 and a digital camera (both not shown in FIG. 12b, since they are within the opaque housing 20). The housing 20 is preferably opaque to shield (or otherwise constructed to filter) the camera and media 12 from visible light. The housing 20 has an opening 20a to receive the media 12. In a first case, opening 20a is adapted to engulf media 12. This allows media 12 to be placed on a surface (e.g., a table, imaging station, or counter) and the housing opening 20a to be placed over media 12, effectively shielding media 12 from visible light. In a second case, the opening 20a receives media 12 (e.g., media 12 slides through opening 20a) and positions media 12 within the opaque housing 20. In either implementation, the infrared illumination source 14 illuminates media 12, and the digital camera captures an image of the illuminated media (e.g., captures an image of the K-channel image). The digital camera communicates with a computing device, which detects and decodes a digital watermark embedded within media 12, if present, or otherwise displays the image.


In another illustrative embodiment, the above described infrared detection technique is carried out in a visibly dark environment, such as a dark room, shielded area, etc. An out-of-phase image (or digital watermark) is embedded in media. The media is illuminated with an infrared illumination source, and a digital camera captures an image of the illuminated media.


In still another illustrative embodiment, the above described infrared detection technique is carried out in a visibly lighted environment. An out-of-phase image (or watermark) is embedded in media. The media is illuminated with an infrared illumination source, and a digital camera captures an image of the media. Preferably, the camera includes an IR-pass filter. The digital camera communicates with a computing device, which detects and decodes an out-of-phase image (or digital watermark) embedded in the media.


Infrared detection is an elegant solution for detecting out-of-phase images or digital watermarks, since high transmission colors in the IR spectrum are effectively washed out, allowing detection of a low transmission color channel. Specialized inks are not required to embed the out-of-phase digital watermark. Indeed, most multicolor printer ink packs, offset inks, process inks, and dye diffusion thermal transfer inks, such as inks conforming to the SWOP standard, include black inks that allow infrared detection. Some of these inks include a carbon-based black ink, furthering the absorption of IR. While infrared detection is ideal for out-of-phase images or digital watermarks, this method is also applicable to detection of conventional digital watermarks. For instance, a watermark signal can be embedded only in a black channel of media. Infrared illumination helps to reveal the embedded watermark in this black channel. Alternatively, a digital watermark is embedded across many color planes, while detection is carried out in only those color planes that are perceptible with IR illumination. Additionally, while we have discussed infrared detection techniques, we note that ultraviolet (UV) detection is also possible. In this case, one of the color channels (including the K channel) preferably includes UV pigments or properties. A UV detection process is carried out in a manner analogous to that discussed above. (We also note that a CMY color can include IR/UV pigments or properties to facilitate detection of that color with respective IR or UV detection methods.)


Applications


Now consider a few applications of our inventive out-of-phase hiding techniques.


Identification Documents (e.g., Passports, Driver's Licenses, Etc.)


An out-of-phase image is hidden in an identification document to provide enhanced security. For example, a hidden image is a gray-scale version of the identification document's photograph. An airport screener, or law enforcement officer, illuminates the out-of-phase image with infrared (or ultraviolet) light for comparison of the hidden image to the printed photograph. Or, instead of a photograph, the hidden image may include text, which can be compared with the visibly printed text on the identification document.


In assignee's U.S. Published Patent Application No. US 2002-0170966 A1, we disclosed various security and authentication improvements. One disclosed improvement ties machine-readable code, such as barcode information, to a digital watermark. Our inventive out-of-phase hiding techniques can be used with the techniques disclosed in the above-mentioned application. For example, instead of hiding an out-of-phase image in the identification document, we instead embed an out-of-phase digital watermark. The digital watermark includes a payload, which has information corresponding to the printed information or to information included in a barcode. In one implementation, the information includes a hash of the barcode information. In another implementation, we hide a barcode in the identification document as discussed below.


Hiding Bar Codes in Out-of-Phase Channels


Over the years, a number of standards organizations and private entities have formed symbology standards for bar codes. Some examples of standards bodies include the Uniform Code Council (UCC), European Article Numbering (EAN, also referred to as the International Article Numbering Association), Japanese Article Numbering (JAN), the Health Industry Bar Coding Council (HIBC), the Automotive Industry Action Group (AIAG), Logistics Application of Automated Marking and Reading Symbols (LOGMARS), Automatic Identification Manufacturers (AIM), the American National Standards Institute (ANSI), and the International Standards Organization (ISO).


The UCC is responsible for the ubiquitous bar code standard called the Universal Product Code (UPC). AIM manages standards for industrial applications and publishes standards called Uniform Symbology Standards (USS). Some well-known bar code schemes include UPC and UCC/EAN-128, Codabar developed by Pitney Bowes Corporation, I 2 of 5 and Code 128 developed by Computer Identics, Code 39 (or 3 of 9) developed by Intermec Corporation, and Code 93.


Some bar codes, such as UPC, are fixed length, while others are variable length. Some support only numbers, while others support alphanumeric strings (e.g., Code 39 supports the full ASCII character set). Some incorporate error checking functionality.


While the bar codes listed above are generally one-dimensional in that they consist of a linear string of bars, bar codes may also be two-dimensional. Two-dimensional bar codes may be in a stacked form (e.g., a vertical stacking of one-dimensional codes), a matrix form, a circular form, or some other two-dimensional pattern. Some examples of 2D barcodes include Code 49, Code 16K, Data Matrix developed by RVSI, QR Code, micro PDF-417 and PDF-417.


For more information on bar codes, see D. J. Collins, N. N. Whipple, Using Bar Code-Why It's Taking Over, (2d ed.) Data Capture Institute; R. C. Palmer, The Bar Code Book, (3rd ed.) Helmers Publishing, Inc., and P. L. Grieco, M. W. Gozzo, C. J. Long, Behind Bars, Bar Coding Principles and Applications, PT Publications Inc., which are herein incorporated by reference.


A hidden, out-of-phase image can include a barcode. Consider the vast possibilities. A barcode is often disdained for aesthetic reasons, but a hidden, out-of-phase barcode can carry relatively large amounts of information while remaining virtually imperceptible. In one implementation, a barcode is redundantly hidden or tiled throughout media using our out-of-phase embedding techniques. This allows for robust barcode detection even if only a portion of the media is recoverable. In another implementation one or more barcodes are placed in predetermined areas throughout the image. In still another implementation, a barcode reader, such as those provided by Symbol (e.g., the VS4000 and P300IMG models) or Welch Allyn (e.g., the Dolphin model), is augmented with an infrared illumination source and/or IR-filters. Once illuminated, the barcode reader detects and decodes a barcode hidden in a K channel.


Fiducials and Orientation Signal


In some digital watermarking techniques, the components of the digital watermark structure may perform the same or different functions. For example, one component may carry a message, while another component may serve to identify the location or orientation of the watermark in a signal. This orientation component is helpful in resolving signal distortion issues such as rotation, scale and translation. (Further reference to orientation signals can be made, e.g., to previously mentioned U.S. application Ser. No. 09/503,881). In some cases, channel capacity is congested by an orientation signal.


One improvement is to embed an orientation signal using our out-of-phase hiding techniques. The message component of a digital watermark can then be embedded using out-of-phase or non-out-of-phase embedding techniques. This improvement will increase message capacity, while improving visibility considerations. Scale, orientation, and image translation can be resolved based on the orientation of the fiducial.


A related improvement embeds a plurality of fiducials or orientation markers 54 in an out-of-phase channel of media 12 (FIG. 13). A watermark detection module detects the fiducials to identify distortion.


Spot Colors


We have found that our inventive techniques are not limited to process colors. Indeed, our out-of-phase techniques can be extended to spot colors. (See Assignee's U.S. patent application Ser. No. 10/074,677, filed Feb. 11, 2002 (now U.S. Pat. No. 6,763,124), for a further discussion of spot colors and digitally watermarking spot colors. U.S. Pat. No. 6,763,124 is hereby incorporated by reference.) With reference to FIG. 14, and preferably (but not limited to) relatively darker spot colors, e.g., violets, blues, etc., we counteract a watermark signal (or image) embedded in the spot color channel with an inverted signal in a K channel. Preferably, the K channel base intensity is subtle (e.g., 0% as represented by the K channel base level dashed line in FIG. 14) in comparison to the base level spot color intensity (e.g., 100% intensity as represented by the spot color maximum level dashed line in FIG. 14). The watermark signal (or image) is embedded through a combination of negative spot color tweaks and positive, offsetting, K channel tweaks. Infrared illumination facilitates detection of the K-channel watermark tweaks. (Embedding a spot color need not be limited to negative tweaks. Indeed, if the spot color is not at 100% intensity, positive spot color tweaks and corresponding negative K channel tweaks can facilitate embedding.)


Paper Information and Printing Processes


Another improvement is to carry printing process information and/or paper characteristics with a digital watermark. For example, a digital watermark may include signal gain or embedding characteristics that are specific to a printing press, printing process, process ink type or paper characteristics. The digital watermark can be embedded in a digital file, which is analyzed prior to a print run. The embedding process is adjusted according to the watermark data. Or the watermark signal can be analyzed after printing one or more test copies. The signal strength or payload metric can be analyzed to determine whether the process should be adjusted.


Our out-of-phase digital watermark can be used to detect a misalignment in a printing process. With reference to FIG. 15, a printer 150 outputs a CMYK (or spot color, etc.) printed sheet 152. The printed sheet includes an out-of-phase digital watermark or image hidden therein. An input device 154 captures an image of sheet 152. Preferably, input device 154 captures a visible spectrum image of sheet 152. The input device provides the captured image (e.g., digital scan data) to a watermark detector 156. The watermark detector 156 analyzes the captured image in search of the embedded out-of-phase digital watermark. The watermark detector 156 should not be able to detect the embedded watermark if the printing of the CMY and K is aligned, due to the localized cancellation of the signal tweaks (or luminance changes). The term "aligned" in this context implies that the CMY and K are sufficiently inverted to allow localized cancellation. A misalignment is identified if the watermark detector 156 reads the digital watermark. Such a misalignment is optionally communicated from the watermark detector 156 to the printer 150 or otherwise provided to announce the printing misalignment. Of course, other alignment and color balance information can be identified from the detection of the digital watermark.
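In simplified form, and assuming a detect callable that reports whether the out-of-phase watermark is readable in a visible-spectrum scan, the registration check amounts to the following sketch; names are illustrative only.

    def check_registration(visible_scan, detect_watermark):
        # If the watermark is readable in a visible-light scan, the local
        # CMY/K cancellation has failed, indicating misregistration.
        return "misaligned" if detect_watermark(visible_scan) else "aligned"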


Color Channel Keys


A related inventive technique embeds a key in one color channel for decoding a watermark in a second color channel. Consider an implementation where a first digital watermark is embedded in a first color channel. The first digital watermark includes a payload including a key. The key is used to decode a digital watermark embedded in a second color plane. The term decode in this context includes providing a reference point to locate the second watermark, providing a key to unlock, decrypt, decode or unscramble the second digital watermark payload, etc. Of course this inventive technique is not limited to our out-of-phase digital watermarks.


Fragile Security


Our out-of-phase hiding techniques are fragile since a signal processing operation that combines the K channel with the CMY channels effectively cancels the hidden signal. A fragile watermark is one that is lost or degrades predictably with signal processing. Conversion to other color spaces similarly degrades the watermark signal. Take a typical scan/print process for example. Digital scanners typically have RGB image sensors to measure the image color. Scanning an out-of-phase embedded CMYK image degrades the embedded watermark due to the combination of K with CMY in a local area, effectively canceling the watermark. When the RGB image representation is converted to CMYK and printed, the watermark signal is effectively lost. Similarly, other conversions, such as to an L*a*b color space, degrade the out-of-phase watermark due to the combination of K with CMY throughout local areas. Nevertheless, the watermark signal is detectable from an original CMYK media, since the K channel can be detected separately by viewing, e.g., in the near infrared.


A fragile watermark has utility in many applications. Take counterfeiting, for example. The inventive fragile watermark is embedded in original CMYK media. If the media is copied, the embedded fragile watermark is either lost or degrades predictably. The copy is recognized as a copy (or counterfeit) by the absence or degradation of the fragile watermark. Fragile watermarks can also be used in conjunction with other watermarks, such as robust watermarks. The fragile watermark announces a copy or counterfeit by its absence or degradation, while the other robust watermark identifies author, source, links and/or conveys metadata or other information, etc. In other embodiments, a fragile watermark is an enabler. For example, some fragile watermarks may include plural-bit data that is used to enable a machine, allow access to a secure computer area, verify authenticity, and/or link to information. This plural-bit data is lost or sufficiently degrades in a copy, preventing the enabling functions.


Another inventive feature is to embed a hash or other representation of a product (e.g., product code or serial number) in a digital watermark payload or message. The digital watermark is then tied or linked directly to the product. If the product includes a barcode having the product code, such can be compared with the digital watermark.


Imperceptible Embedding


Our inventive techniques provide a very imperceptible digital watermark, particularly for printed images. One advantage of our embedding techniques is that a relatively strong signal can be inserted while still minimizing visibility to the human eye. In one implementation we take advantage of the low sensitivity of the human visual system to high frequency blue/yellow changes (e.g., chrominance). With reference to FIG. 16, a blue signal tweak (e.g., representing a watermark component in terms of pixel color values) is calculated. The signal tweak can be in the form of a spatial image change (e.g., pixel color values) or a frequency domain change. (If a frequency change, the change is preferably converted to a spatial domain adjustment so that an offsetting signal change can be determined.) In fact, a watermarking signal, e.g., as described in assignee's U.S. Pat. Nos. 6,614,914 and 6,122,403, can be provided and used as a blue channel tweak. (In actuality there will be many such tweaks spread over an image in various locations.) An inverted or offsetting signal tweak is then determined for the red and green channels at a corresponding image area (e.g., a corresponding spatial or pixel location, but in the different channels). One goal of the inverted signal is to provide a resulting image with constant luminance at the various embedding areas. For each tweak in the blue channel, we preferably provide an offsetting tweak in the red and/or green channels. This offsetting tweak cancels or offsets localized luminance changes attributable to the blue channel change. We have found that an inverted or offsetting signal tweak of minus ⅛ of the blue tweak, applied to each of the red and green color channels, helps maintain constant luminance in image or video areas receiving signal tweaks. (For example, if a signal tweak of 16 is applied to a blue pixel or group of pixels, a minus 2 signal tweak is applied to a corresponding pixel or group of pixels in each of the red and green channels.) Thus, the watermark signal is effectively conveyed in chrominance. (While we prefer a minus ⅛ tweak in each of the red and green channels, some luminance cancellation is found as the minus tweak values range from about 1/16 to ¼ of the blue tweak.) We sometimes affectionately refer to this type of digital watermark embedding as "blue phase" embedding.
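An illustrative sketch of this blue phase embedding follows, assuming 8-bit RGB planes as NumPy arrays and a tweak pattern expressed in blue channel pixel values; the minus ⅛ offset applied to red and green mirrors the example above, and all names are illustrative only.

    import numpy as np

    def embed_blue_phase(r, g, b, tweak, offset_ratio=0.125):
        # Add the tweak to blue; subtract about 1/8 of it from red and green
        # (e.g., a blue tweak of 16 yields a -2 tweak in each of red and green),
        # roughly holding local luminance constant.
        b_out = np.clip(b.astype(int) + tweak, 0, 255).astype(np.uint8)
        rg = np.round(offset_ratio * np.asarray(tweak)).astype(int)
        r_out = np.clip(r.astype(int) - rg, 0, 255).astype(np.uint8)
        g_out = np.clip(g.astype(int) - rg, 0, 255).astype(np.uint8)
        return r_out, g_out, b_out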


The “tweaked” or embedded color channels are provided to a printer for printing. We note that most of today's printers and/or printer drivers have sophisticated color converters that convert RGB signals into CMY or CMYK signals for printing. Those of ordinary skill in the art will know of different color converting techniques as well. Our above blue phase watermarking survives this color conversion quite robustly.


Watermark detection of a printed document includes presenting the printed image to an optical scanner. The optical scanner captures scan data corresponding to the printed image, preferably including scan data representing (or converted to) red, green and blue channels. We can combine the color channels to help emphasize the watermark signal and minimize image interference. For example, we preferably scale and process the color channels per pixel color or chrominance values as follows:

Detection Signal (chrominance)=0.5*blue−0.25*(red+green)+128.


The scaling of color channels is chosen to minimize image interference (e.g., color channels are subtracted) and to avoid saturation, e.g., if color data is being represented as an 8-bit value. The 128 pixel color or grayscale value helps shift a color value to avoid color saturation. Of course, this shifting value can range depending on image characteristics, detector requirements, etc. For example, the shift can be in a color value (e.g., often represented as a grayscale value for a particular color channel) range of about 64-192. Acceptable detection may also occur when the blue channel is scaled in a range of 0.3-0.75 and the red+green sum is scaled proportionally in a range of 0.15-0.375.
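A sketch of forming this detection signal from scanned RGB data, using the scaling and shift described above, might look like the following; the function name is illustrative only.

    import numpy as np

    def blue_phase_detection_signal(r, g, b, b_scale=0.5, rg_scale=0.25, shift=128):
        # Detection Signal (chrominance) = 0.5*blue - 0.25*(red + green) + 128,
        # clipped to the 8-bit range.
        signal = b_scale * b.astype(float) - rg_scale * (r.astype(float) + g.astype(float)) + shift
        return np.clip(signal, 0, 255).astype(np.uint8)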


Since the watermark signal is effectively conveyed in the chrominance channel, we have found that this type of watermarking is somewhat susceptible to JPEG compression. Nevertheless, while print applications are one of the main areas of application for these blue phase techniques, there are many other areas that will benefit from these techniques as well, e.g., digital cinema. Our blue phase techniques are used to embed a digital watermark signal in a video signal after it is decompressed, but before (or as) it is being projected on the screen. That is, the uncompressed data stream is fed into a digital watermark embedder. The various color channels are embedded as discussed above. The projected video includes a blue phase watermark. The watermark can include a plural-bit payload that, e.g., identifies the projector, theater, date/time, movie, etc. We can add a buffering system to ensure that the perceived video, from the paying customer's point of view, is uninterrupted.


Another application is a combination of a blue phase watermark with other types of watermarks (e.g., luminance based watermark). Chrominance and luminance are generally orthogonal. This allows for little or no interference between these types of watermarks. Different watermark components can be conveyed with each type of watermark. For example, a chrominance based watermark can include a so-called watermark orientation component while a luminance based watermark includes a message or payload that is synchronized according to the watermark orientation component. The message or payload can vary across an image (e.g., the plural-bits of the message change according to spatial location) while the orientation component remains constant. This is particularly helpful in map or geo-location applications, where different image regions represent different geo-locations. The messages or payloads can represent or link to geo-location information. The curious reader is directed to the following related applications: US 2002-0122564 A1; US 2002-0124171 A1; US 2002-0135600 A1 and US 2004-0008866 A1, which are each hereby incorporated by reference. If using two types of watermarking, a detector can be constructed that analyzes different frames under different detection protocols. For example, a first frame is analyzed according to the blue phase detection mentioned above. A second frame is analyzed to detect a luminance (or other) based watermark. A third frame is again analyzed to detect a blue phase watermark, etc.
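One way such a detector might alternate protocols across captured frames is sketched below, assuming injected blue phase and luminance detector callables; the even/odd alternation is an illustrative assumption only.

    def detect_across_frames(frames, detect_blue_phase, detect_luminance):
        # Alternate between a blue phase detection pass and a luminance-based
        # detection pass on successive frames.
        results = []
        for i, frame in enumerate(frames):
            if i % 2 == 0:
                results.append(("blue_phase", detect_blue_phase(frame)))
            else:
                results.append(("luminance", detect_luminance(frame)))
        return results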


We have also found that our blue phase watermarking provides strong detection results in many of today's handheld readers (e.g., cell phones, PDAs, etc.).


Of course our blue phase embedding techniques can be used with the many other implementations and features discussed in this and the incorporated-by-reference patent documents. For example, instead of embedding a watermark signal, we can embed an image or 2D barcode with blue phase techniques. For every blue phase change representing an image or 2D barcode, we can introduce a corresponding and offsetting change in red and green, in hopes of maintaining constant luminance in embedding areas.


CONCLUSION

Preferably, an out-of-phase watermark signal is embedded 180 degrees out-of-phase with the corresponding channels. However, some cancellation will still be achieved if the signal is only approximately 180 degrees out-of-phase, for example, in a range of approximately ±0-50% from the 180-degree mark. The term "inverted" includes values within this range. We note that while the present disclosure has been described with respect to CMYK process inks, the present invention is not so limited. Indeed, our inventive techniques can be applied to printing processes using more than four inks, with the K channel canceling the three or more color channels. Similarly, as shown above under the spot color discussion, our inventive techniques are also applicable to printing processes using fewer than four inks. Of course our techniques can be used with a variety of printing techniques, including offset printing, dye diffusion thermal transfer (D2T2), other thermal transfers, process ink printing, etc.


The section headings in this application are provided merely for the reader's convenience, and provide no substantive limitations. Of course, the disclosure under one section heading may be readily combined with the disclosure under another section heading.


To provide a comprehensive disclosure without unduly lengthening this specification, the above-mentioned patents and patent applications are hereby incorporated by reference, along with U.S. Pat. No. 6,763,122. The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this application and the incorporated-by-reference patents/applications are also contemplated.


The above-described methods and functionality can be facilitated with computer executable software stored on computer readable media, such as electronic memory circuits, RAM, ROM, magnetic media, optical media, memory sticks, hard disks, removable media, etc., etc. Such software may be stored and executed on a general purpose computer, or on a server for distributed use. Data structures representing the various luminance values, out-of-phase embedded signals, embedded color planes, color signals, data signals, luminance signals, etc., may also be stored on such computer readable media. Also, instead of software, a hardware implementation, or a software-hardware implementation can be used.


In view of the wide variety of embodiments to which the principles and features discussed above can be applied, it should be apparent that the detailed embodiments are illustrative only and should not be taken as limiting the scope of the invention. Rather, we claim as our invention all such modifications as may come within the scope and spirit of the following claims and equivalents thereof.

Claims
  • 1. A method comprising: receiving captured imagery of media, wherein the captured imagery includes a plurality of image frames; for a first image frame, applying, using an electronic processor, a first watermark detector to search for a first imperceptible digital watermark hidden within the first image frame, wherein the first watermark detector uses a blue phase technique to search for the first imperceptible digital watermark, wherein the first image frame represents a blue channel, a green channel, and a red channel, wherein the first watermark detector uses a detection signal to search for the first imperceptible digital watermark, and wherein the detection signal is calculated using the blue channel, green channel, and the red channel; for a second image frame, applying, using the electronic processor, a second, different watermark detector to search for a second, different imperceptible watermark hidden within the second image frame; scaling the blue channel by a first factor; scaling a sum of the red channel and the green channel by a second factor; and subtracting the scaled sum of the red channel and green channel from the scaled blue channel as part of the detection signal.
  • 2. The method of claim 1, wherein the first image frame and the second image frame each represent a same portion of the media.
  • 3. The method of claim 1, wherein the second image frame is different than the first image frame.
  • 4. The method of claim 1, wherein the first factor is different than the second factor.
  • 5. The method of claim 4, further comprising shifting the detection signal by a value to avoid color saturation.
  • 6. The method of claim 1, wherein the first factor is greater than the second factor.
  • 7. The method of claim 1, wherein the second imperceptible watermark is an orientation component.
  • 8. A system comprising: an electronic processor configured to: receive captured imagery of media, wherein the captured imagery includes a plurality of image frames; for a first image frame, apply a first watermark detector to search for a first digital imperceptible watermark hidden within the first image frame, wherein the first watermark detector uses a blue phase technique to search for the first imperceptible digital watermark, wherein the first image frame represents a blue channel, a green channel, and a red channel, wherein the first watermark detector uses a detection signal to search for the first imperceptible digital watermark, and wherein the detection signal is calculated using the blue channel, green channel, and the red channel; for a second image frame, apply a second, different watermark detector to search for a second, different imperceptible watermark hidden within the second image frame; scale the blue channel by a first factor; scale a sum of the red channel and the green channel by a second factor; and subtract the scaled sum of the red channel and green channel from the scaled blue channel as part of the detection signal.
  • 9. The system of claim 8, wherein the electronic processor is further configured to shift the detection signal by a value to avoid color saturation.
  • 10. The system of claim 9, wherein the first factor is different than the second factor.
  • 11. A non-transitory computer readable medium having instructions stored thereon, the instructions comprising: instructions to receive captured imagery of media, wherein the captured imagery includes a plurality of image frames; instructions to, for a first image frame, apply a first watermark detector to search for a first digital imperceptible watermark hidden within the first image frame, wherein the first watermark detector uses a blue phase technique to search for the first imperceptible digital watermark, wherein the first image frame represents a blue channel, a green channel, and a red channel, wherein the first watermark detector uses a detection signal to search for the first digital imperceptible watermark, and wherein the detection signal is calculated using the blue channel, green channel, and the red channel; instructions to, for a second image frame, apply a second, different watermark detector to search for a second, different imperceptible watermark hidden within the second image frame; instructions to scale the blue channel by a first factor; instructions to scale a sum of the red channel and the green channel by a second factor; and instructions to subtract the scaled blue channel from the scaled sum of the red channel and green channel as part of the detection signal.
  • 12. The non-transitory computer readable medium of claim 11, wherein the instructions further comprise instructions to shift the detection signal by a value to avoid color saturation.
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS

This application is a Continuation of U.S. application Ser. No. 13/245,353, filed Sep. 26, 2011, which is a Divisional of U.S. application Ser. No. 12/814,218, filed Jun. 11, 2010, incorporated herein by reference in its entirety, which is a Continuation of U.S. application Ser. No. 11/153,901, filed Jun. 14, 2005, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 10/818,938, filed Apr. 5, 2004, incorporated herein by reference in its entirety, which is a Continuation of U.S. application Ser. No. 09/945,243, filed Aug. 31, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/933,863, filed Aug. 20, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/898,901, filed Jul. 2, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/553,084, filed Apr. 19, 2000, incorporated herein by reference in its entirety. U.S. application Ser. No. 11/153,901, filed Jun. 14, 2005, is a Continuation-In-Part of U.S. application Ser. No. 10/823,514, filed Apr. 12, 2004, incorporated herein by reference in its entirety, which is a Continuation of U.S. application Ser. No. 09/898,901, filed Jul. 2, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/553,084, filed Apr. 19, 2000, incorporated herein by reference in its entirety. U.S. application Ser. No. 11/153,901, filed Jun. 14, 2005, is a Continuation-In-Part of U.S. application Ser. No. 10/115,582, filed Apr. 2, 2002, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/945,243, filed Aug. 31, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/933,863, filed Aug. 20, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/898,901, filed Jul. 2, 2001, incorporated herein by reference in its entirety, which is a Continuation-In-Part of U.S. application Ser. No. 09/553,084, filed Apr. 19, 2000, incorporated herein by reference in its entirety.

US Referenced Citations (329)
Number Name Date Kind
4504084 Jauch Mar 1985 A
4725462 Kimura Feb 1988 A
4739377 Allen Apr 1988 A
5051835 Bruehl et al. Sep 1991 A
5093147 Andrus et al. Mar 1992 A
5291243 Heckman et al. Mar 1994 A
5337361 Wang Aug 1994 A
5363212 Taniuchi et al. Nov 1994 A
5385371 Izawa Jan 1995 A
5444779 Daniele Aug 1995 A
5481377 Udagawa et al. Jan 1996 A
5493677 Balogh et al. Feb 1996 A
5502576 Ramsay et al. Mar 1996 A
5530751 Morris Jun 1996 A
5530759 Braudaway et al. Jun 1996 A
5557412 Saito et al. Sep 1996 A
5568555 Shamir Oct 1996 A
5572433 Falconer et al. Nov 1996 A
5617119 Briggs et al. Apr 1997 A
5621810 Suzuki et al. Apr 1997 A
5636874 Singer Jun 1997 A
5646997 Barton Jul 1997 A
5652626 Kawakami et al. Jul 1997 A
5659628 Tachikawa et al. Aug 1997 A
5659726 Sandford et al. Aug 1997 A
5661574 Kawana Aug 1997 A
5664018 Leighton Sep 1997 A
5687236 Moskowitz et al. Nov 1997 A
5689623 Pinard Nov 1997 A
5696594 Saito et al. Dec 1997 A
5721788 Powell et al. Feb 1998 A
5729741 Liaguno et al. Mar 1998 A
5748763 Rhoads May 1998 A
5760386 Ward Jun 1998 A
5787186 Schroeder Jul 1998 A
5788285 Wicker Aug 1998 A
5790693 Graves et al. Aug 1998 A
5790703 Wang Aug 1998 A
5809139 Girod et al. Sep 1998 A
5822436 Rhoads Oct 1998 A
5825892 Braudaway et al. Oct 1998 A
5832186 Kawana Nov 1998 A
5838814 Moore Nov 1998 A
5841491 D'Alfonso et al. Nov 1998 A
5862218 Steinberg Jan 1999 A
5862260 Rhoads Jan 1999 A
5875249 Mintzer et al. Feb 1999 A
5893101 Balogh et al. Apr 1999 A
5905800 Moskowitz et al. May 1999 A
5905819 Daly May 1999 A
5915027 Cox et al. Jun 1999 A
5919730 Gasper et al. Jul 1999 A
5930369 Cox et al. Jul 1999 A
5933798 Linnartz Aug 1999 A
5946414 Cass et al. Aug 1999 A
5951055 Mowry, Jr. Sep 1999 A
5960081 Vynne et al. Sep 1999 A
5960103 Graves et al. Sep 1999 A
5974548 Adams Oct 1999 A
5978013 Jones et al. Nov 1999 A
6045656 Foster et al. Apr 2000 A
6046808 Fateley Apr 2000 A
6054021 Kurrle et al. Apr 2000 A
6081827 Reber Jun 2000 A
6094483 Fridrich et al. Jul 2000 A
6095566 Yamamoto et al. Aug 2000 A
6104812 Koltai et al. Aug 2000 A
6122403 Rhoads Sep 2000 A
6136752 Paz-Pujalt et al. Oct 2000 A
6185312 Nakamura et al. Feb 2001 B1
6185683 Ginter et al. Feb 2001 B1
6192138 Yamadaji Feb 2001 B1
6201879 Bender et al. Mar 2001 B1
6229924 Rhoads et al. May 2001 B1
6233347 Chen et al. May 2001 B1
6233684 Stefik et al. May 2001 B1
6234537 Gutmann et al. May 2001 B1
6246777 Agarwal et al. Jun 2001 B1
6256398 Chang Jul 2001 B1
6268866 Shibata Jul 2001 B1
6272176 Srinivasan Aug 2001 B1
6272248 Saitoh et al. Aug 2001 B1
6272634 Tewfik et al. Aug 2001 B1
6278792 Cox et al. Aug 2001 B1
6281165 Cranford Aug 2001 B1
6285776 Rhoads Sep 2001 B1
6304345 Patton et al. Oct 2001 B1
6307949 Rhoads Oct 2001 B1
6311214 Rhoads Oct 2001 B1
6314192 Chen et al. Nov 2001 B1
6320675 Sakaki et al. Nov 2001 B1
6332031 Rhoads et al. Dec 2001 B1
6332194 Bloom et al. Dec 2001 B1
6334187 Kadono Dec 2001 B1
6356363 Cooper et al. Mar 2002 B1
6373965 Liang Apr 2002 B1
6381341 Rhoads Apr 2002 B1
6385329 Sharma et al. May 2002 B1
6390362 Martin May 2002 B1
6394358 Thaxton et al. May 2002 B1
6404926 Miyahara et al. Jun 2002 B1
6408082 Rhoads et al. Jun 2002 B1
6418232 Nakano et al. Jul 2002 B1
6421070 Ramos et al. Jul 2002 B1
6424725 Rhoads et al. Jul 2002 B1
6425081 Iwamura Jul 2002 B1
6427020 Rhoads Jul 2002 B1
6438251 Yamaguchi Aug 2002 B1
6456726 Yu Sep 2002 B1
6481753 Van Boom et al. Nov 2002 B2
6504941 Wong Jan 2003 B2
6516079 Rhoads et al. Feb 2003 B1
6522770 Seder et al. Feb 2003 B1
6535617 Hannigan et al. Mar 2003 B1
6542927 Rhoads Apr 2003 B2
6553129 Rhoads Apr 2003 B1
6563936 Brill et al. May 2003 B2
6567533 Rhoads May 2003 B1
6580808 Rhoads Jun 2003 B2
6590996 Reed et al. Jul 2003 B1
6611607 Davis et al. Aug 2003 B1
6614914 Rhoads et al. Sep 2003 B1
6636615 Rhoads et al. Oct 2003 B1
6647128 Rhoads Nov 2003 B1
6647130 Rhoads Nov 2003 B2
6650761 Rodriguez et al. Nov 2003 B1
6671376 Koto Dec 2003 B1
6674802 Knee et al. Jan 2004 B2
6681028 Rodriguez et al. Jan 2004 B2
6681029 Rhoads Jan 2004 B1
6694042 Seder et al. Feb 2004 B2
6694043 Seder et al. Feb 2004 B2
6700990 Rhoads Mar 2004 B1
6700995 Reed Mar 2004 B2
6704869 Rhoads et al. Mar 2004 B2
6718046 Reed et al. Apr 2004 B2
6718047 Rhoads Apr 2004 B2
6721440 Reed et al. Apr 2004 B2
6760463 Rhoads Jul 2004 B2
6763122 Rodriguez et al. Jul 2004 B1
6763123 Reed et al. Jul 2004 B2
6763124 Alattar et al. Jul 2004 B2
6768809 Rhoads et al. Jul 2004 B2
6775392 Rhoads Aug 2004 B1
6785815 Serret-Avila et al. Aug 2004 B1
6798894 Rhoads Sep 2004 B2
6804377 Reed et al. Oct 2004 B2
6813366 Rhoads Nov 2004 B1
6879701 Rhoads Apr 2005 B1
6891959 Reed et al. May 2005 B2
6912295 Reed et al. Jun 2005 B2
6917724 Andrew et al. Jul 2005 B2
6920232 Rhoads Jul 2005 B2
6940993 Jones Sep 2005 B2
6947571 Rhoads et al. Sep 2005 B1
6968072 Tian Nov 2005 B1
6975746 Davis et al. Dec 2005 B2
6988202 Rhoads et al. Jan 2006 B1
6996252 Reed et al. Feb 2006 B2
7003731 Rhoads et al. Feb 2006 B1
7006662 Alattar Feb 2006 B2
7024016 Rhoads et al. Apr 2006 B2
7027614 Reed Apr 2006 B2
7035427 Rhoads Apr 2006 B2
7044395 Davis et al. May 2006 B1
7051086 Rhoads et al. May 2006 B2
7054465 Rhoads May 2006 B2
7062069 Rhoads Jun 2006 B2
7072487 Reed et al. Jul 2006 B2
7095871 Jones et al. Aug 2006 B2
7111170 Rhoads et al. Sep 2006 B2
7113614 Rhoads Sep 2006 B2
7139408 Rhoads et al. Nov 2006 B2
7158654 Rhoads Jan 2007 B2
7164780 Brundage et al. Jan 2007 B2
7171016 Rhoads Jan 2007 B1
7174031 Rhoads et al. Feb 2007 B2
7177443 Rhoads Feb 2007 B2
7213757 Jones et al. May 2007 B2
7224819 Levy et al. May 2007 B2
7248717 Rhoads Jul 2007 B2
7261612 Hannigan et al. Aug 2007 B1
7305104 Carr et al. Dec 2007 B2
7308110 Rhoads Dec 2007 B2
7313251 Rhoads Dec 2007 B2
7319775 Sharma et al. Jan 2008 B2
7330564 Brundage et al. Feb 2008 B2
7369678 Rhoads May 2008 B2
7377421 Rhoads May 2008 B2
7391880 Reed et al. Jun 2008 B2
7397584 Harrington Jul 2008 B2
7406214 Rhoads et al. Jul 2008 B2
7424131 Alattar et al. Sep 2008 B2
7427030 Jones et al. Sep 2008 B2
7433491 Rhoads Oct 2008 B2
7444000 Rhoads Oct 2008 B2
7444392 Rhoads et al. Oct 2008 B2
7450734 Rodriguez et al. Nov 2008 B2
7460726 Levy et al. Dec 2008 B2
7466840 Rhoads Dec 2008 B2
7486799 Rhoads Feb 2009 B2
7486819 Subbotin Feb 2009 B2
7502759 Hannigan et al. Mar 2009 B2
7508955 Carr et al. Mar 2009 B2
7515733 Rhoads Apr 2009 B2
7536034 Rhoads et al. May 2009 B2
7537170 Reed et al. May 2009 B2
7545952 Brundage et al. Jun 2009 B2
7564992 Rhoads Jul 2009 B2
RE40919 Rhoads Sep 2009 E
7602978 Levy et al. Oct 2009 B2
7628320 Rhoads Dec 2009 B2
7643649 Davis et al. Jan 2010 B2
7650009 Rhoads Jan 2010 B2
7653210 Rhoads Jan 2010 B2
7657058 Sharma Feb 2010 B2
7685426 Ramos et al. Mar 2010 B2
7693300 Reed et al. Apr 2010 B2
7697719 Rhoads Apr 2010 B2
7711143 Rhoads May 2010 B2
7738673 Reed Jun 2010 B2
7747038 Rhoads Jun 2010 B2
7751588 Rhoads Jul 2010 B2
7751596 Rhoads Jul 2010 B2
7756290 Rhoads Jul 2010 B2
7760905 Rhoads et al. Jul 2010 B2
7762468 Reed et al. Jul 2010 B2
7787653 Rhoads Aug 2010 B2
7792325 Rhoads et al. Sep 2010 B2
7822225 Alattar Oct 2010 B2
7837094 Rhoads Nov 2010 B2
20010014169 Liang Aug 2001 A1
20010021144 Oshima et al. Sep 2001 A1
20010024510 Iwamura Sep 2001 A1
20010026377 Ikegami Oct 2001 A1
20010028727 Naito et al. Oct 2001 A1
20010030759 Hayashi et al. Oct 2001 A1
20010030761 Ideyahma Oct 2001 A1
20010033674 Chen et al. Oct 2001 A1
20010034705 Rhoads et al. Oct 2001 A1
20010037313 Lofgren et al. Nov 2001 A1
20010037455 Lawandy et al. Nov 2001 A1
20010040980 Yamaguchi Nov 2001 A1
20010052076 Kadono Dec 2001 A1
20010053235 Sato Dec 2001 A1
20010054644 Liang Dec 2001 A1
20010055407 Rhoads Dec 2001 A1
20020009208 Alattar et al. Jan 2002 A1
20020015509 Nakamura et al. Feb 2002 A1
20020018879 Barnhart et al. Feb 2002 A1
20020021824 Reed et al. Feb 2002 A1
20020023218 Lawandy et al. Feb 2002 A1
20020027612 Brill et al. Mar 2002 A1
20020027674 Tokunaga et al. Mar 2002 A1
20020031241 Kawaguchi et al. Mar 2002 A1
20020033844 Levy et al. Mar 2002 A1
20020040433 Kondo Apr 2002 A1
20020057431 Fateley et al. May 2002 A1
20020059162 Shinoda May 2002 A1
20020061121 Rhoads et al. May 2002 A1
20020061122 Fujihara May 2002 A1
20020062442 Kurahashi May 2002 A1
20020064298 Rhoads et al. May 2002 A1
20020067844 Reed et al. Jun 2002 A1
20020068987 Hars Jun 2002 A1
20020073317 Hars Jun 2002 A1
20020080396 Silverbrook et al. Jun 2002 A1
20020083123 Freedman et al. Jun 2002 A1
20020097873 Petrovic Jul 2002 A1
20020099943 Rodriguez et al. Jul 2002 A1
20020101597 Hoover Aug 2002 A1
20020118381 Shirai et al. Aug 2002 A1
20020118394 McKinley et al. Aug 2002 A1
20020122568 Zhao Sep 2002 A1
20020131076 Davis Sep 2002 A1
20020141310 Stephany Oct 2002 A1
20020150246 Ogino Oct 2002 A1
20020153661 Brooks et al. Oct 2002 A1
20020163633 Cohen Nov 2002 A1
20020163671 Takaragi Nov 2002 A1
20020164051 Reed et al. Nov 2002 A1
20020176003 Seder et al. Nov 2002 A1
20020176600 Rhoads et al. Nov 2002 A1
20020178368 Yin et al. Nov 2002 A1
20020186886 Rhoads Dec 2002 A1
20020196272 Ramos et al. Dec 2002 A1
20030005304 Lawandy et al. Jan 2003 A1
20030012562 Lawandy et al. Jan 2003 A1
20030032033 Anglin et al. Feb 2003 A1
20030040957 Rodriguez et al. Feb 2003 A1
20030056104 Carr et al. Mar 2003 A1
20030105730 Davis et al. Jun 2003 A1
20030130954 Carr et al. Jul 2003 A1
20040005093 Rhoads Jan 2004 A1
20040190750 Rodriguez et al. Sep 2004 A1
20040201696 Yoda Oct 2004 A1
20040240704 Reed Dec 2004 A1
20040264733 Rhoads et al. Dec 2004 A1
20050041835 Reed et al. Feb 2005 A1
20050058318 Rhoads Mar 2005 A1
20050192933 Rhoads et al. Sep 2005 A1
20060013435 Rhoads Jan 2006 A1
20060041591 Rhoads Feb 2006 A1
20060062428 Alattar et al. Mar 2006 A1
20060251291 Rhoads Nov 2006 A1
20070055884 Rhoads Mar 2007 A1
20070108287 Davis et al. May 2007 A1
20070154064 Rhoads et al. Jul 2007 A1
20070276841 Rhoads et al. Nov 2007 A1
20070276928 Rhoads et al. Nov 2007 A1
20080121728 Rodriguez May 2008 A1
20080133555 Rhoads et al. Jun 2008 A1
20080292134 Sharma et al. Nov 2008 A1
20090012944 Rodriguez et al. Jan 2009 A1
20090116687 Rhoads et al. May 2009 A1
20090125475 Rhoads et al. May 2009 A1
20090232352 Carr et al. Sep 2009 A1
20090286572 Rhoads et al. Nov 2009 A1
20090290754 Rhoads Nov 2009 A1
20100027837 Levy et al. Feb 2010 A1
20100045816 Rhoads Feb 2010 A1
20100062819 Hannigan et al. Mar 2010 A1
20100094639 Rhoads Apr 2010 A1
20100142749 Ellingson et al. Jun 2010 A1
20100172540 Davis et al. Jul 2010 A1
20100198941 Rhoads Aug 2010 A1
20110007936 Rhoads Jan 2011 A1
20110026777 Rhoads et al. Feb 2011 A1
20110051998 Rhoads Mar 2011 A1
Foreign Referenced Citations (32)
Number Date Country
2943436 May 1981 DE
590884 Apr 1994 EP
642060 Mar 1995 EP
705022 Apr 1996 EP
991047 Apr 2000 EP
1077570 Feb 2001 EP
1137244 Sep 2001 EP
1152592 Nov 2001 EP
1173001 Jan 2002 EP
1209897 May 2002 EP
1534403 Dec 1976 GB
2360659 Sep 2001 GB
H07093567 Apr 1995 JP
H07108786 Apr 1995 JP
WO9513597 May 1995 WO
WO9603286 Feb 1996 WO
WO9636163 Nov 1996 WO
WO9910837 Mar 1999 WO
WO0016546 Mar 2000 WO
WO0105075 Jan 2001 WO
WO0108405 Feb 2001 WO
WO0139121 May 2001 WO
WO0172030 Sep 2001 WO
WO0173997 Oct 2001 WO
WO0197128 Dec 2001 WO
WO0197175 Dec 2001 WO
WO0217631 Feb 2002 WO
WO0219269 Mar 2002 WO
WO0221846 Mar 2002 WO
WO0223481 Mar 2002 WO
WO02087250 Oct 2002 WO
WO0188883 Nov 2002 WO
Non-Patent Literature Citations (35)
Entry
U.S. Appl. No. 09/343,101, filed Jun. 29, 1999, Bruce L. Davis, et al.
U.S. Appl. No. 09/343,104, filed Jun. 29, 1999, Tony F. Rodriguez, et al.
U.S. Appl. No. 09/413,117, filed Oct. 6, 1999, Geoffrey B. Rhoads.
U.S. Appl. No. 09/482,749, filed Jan. 13, 2000, Geoffrey B. Rhoads.
U.S. Appl. No. 09/507,096, filed Feb. 17, 2000, Geoffrey B. Rhoads, et al.
U.S. Appl. No. 09/538,493, filed Mar. 30, 2000, Geoffrey B. Rhoads.
U.S. Appl. No. 09/552,998, filed Apr. 19, 2000, Tony F. Rodriguez, et al.
U.S. Appl. No. 09/567,405, filed May 8, 2000, Geoffrey B. Rhoads, et al.
U.S. Appl. No. 09/629,649, filed Aug. 1, 2000, J. Scott Carr, et al.
U.S. Appl. No. 09/633,587, filed Aug. 7, 2000, Geoffrey B. Rhoads, et al.
U.S. Appl. No. 09/689,289, filed Oct. 11, 2000, Geoffrey B. Rhoads, et al.
U.S. Appl. No. 09/697,009, filed Oct. 25, 2000, Bruce L. Davis, et al.
U.S. Appl. No. 09/697,015, filed Oct. 25, 2000, Bruce L Davis, et al.
U.S. Appl. No. 12/912,461, filed Oct. 26, 2010, Adnan M. Alattar.
U.S. Appl. No. 12/953,190, filed Nov. 23, 2010, Geoffrey B. Rhoads.
Alattar, “‘Smart Images’ Using Digimarc's Watermarking Technology,” IS&T/SPIE's 12th Int. Symposium on Electronic Imaging, San Jose, CA, Jan. 25, 2000, vol. 3971, No. 25, 10 pages.
Battiato et al., “Robust Watermarking for Images Based on Color Manipulation,” IH/99 LNCS 1768, pp. 302-317, 2000.
Bender et al., “Applications for Data Hiding,” IBM Systems Journal, vol. 39, Nos. 3&4, 2000, pp. 547-568.
Bors et al., “Image Watermarking Using DCT Domain Constraints,” Proc. Int. Conf. on Image Processing, vol. 3, pp. 231-234.
Brownell, “Counterfeiters Dye Over Security Measures,” SPIE's OE Magazine, Sep. 2001, pp. 8-9.
Fleet et al., “Embedding Invisible Information in Color Images,” Proc. Int. Conf. on Image Processing, vol. 1, pp. 532-535, Oct. 1997.
Frequently Asked Questions About Digimarc Signature Technology, Aug. 1, 1995, http://www.digimarc.com, 9 pages.
Hartung et al., “Digital Watermarking of Raw and Compressed Video,” Proc. SPIE 2952, Digital Compression Technologies and Systems for Video Communications, Oct. 1996, pp. 205-213.
“Holographic signatures for digital images,” The Seybold Report on Desktop Publishing, Aug. 1995, one page.
Hunt, “The Reproduction of Colour in Photography, Printing & Television,” 1987, pp. 588-589 and Plate 35 (in color).
Koch et al., “Toward Robust and Hidden Image Copyright Labeling,” Proc. of 1995 IEEE Workshop on Nonlinear Signal and Image Processing, Jun. 20-22, 1995, 4 pages.
Kohda et al., “Digital Watermarking Through CDMA Channels Using Spread Spectrum Techniques,” 2000 IEEE, pp. 671-674.
Komatsu et al., “A Proposal on Digital Watermark in Document Image Communication and Its Application to Realizing a Signature,” Electronics and Communication in Japan, Part 1, vol. 73, No. 5, 1990, pp. 22-33.
Komatsu et al., “Authentication System Using Concealed Image in Telematics,” Memoir of the School of Science & Engineering, Waseda Univ., No. 52, 1988, pp. 45-60.
Kutter et al., “Digital Signature of Color Images Using Amplitude Modulation,” SPIE vol. 3022, 1997, pp. 518-526.
O Ruanaidh et al., “Watermarking Digital Images for Copyright Protection,” http://www.kalman.mee.tod.le/people/ijr/eva_pao.html, Feb. 2, 1996, 6 pages.
Piva et al., “Exploiting the Cross-Correlation of RGB-Channels for Robust Watermarking of Color Images,” 1999 IEEE, pp. 306-310.
Vidal et al., “Non-Noticeable Information Embedding in Color Images: Marking and Detection,” IEEE (1999), pp. 293-297.
Voyatzis et al., “Embedding Robust Watermarks by Chaotic Mixing,” Digital Signal Processing Proceedings, IEEE Jul. 1997, pp. 213-216, vol. 1.
Wang et al., “Embedding Digital Watermarks in Halftone Screens,” Security and Watermarking of Multimedia Contents II, Proc. of SPIE vol. 3971 (2000), pp. 219-227.
Related Publications (1)
Number Date Country
20160048940 A1 Feb 2016 US
Divisions (1)
Number Date Country
Parent 12814218 Jun 2010 US
Child 13245353 US
Continuations (4)
Number Date Country
Parent 13245353 Sep 2011 US
Child 14924449 US
Parent 11153901 Jun 2005 US
Child 12814218 US
Parent 09945243 Aug 2001 US
Child 10818938 US
Parent 09898901 Jul 2001 US
Child 10823514 US
Continuation in Parts (11)
Number Date Country
Parent 10818938 Apr 2004 US
Child 11153901 US
Parent 09933863 Aug 2001 US
Child 09945243 US
Parent 09898901 Jul 2001 US
Child 09933863 US
Parent 09553084 Apr 2000 US
Child 09898901 US
Parent 10823514 Apr 2004 US
Child 11153901 Jun 2005 US
Parent 09553084 Apr 2000 US
Child 09898901 US
Parent 10115582 Apr 2002 US
Child 11153901 Jun 2005 US
Parent 09945243 Aug 2001 US
Child 10115582 US
Parent 09933863 Aug 2001 US
Child 09945243 US
Parent 09898901 Jul 2001 US
Child 09933863 US
Parent 09553084 Apr 2000 US
Child 09898901 US